20 changes: 19 additions & 1 deletion pyiceberg/expressions/visitors.py
@@ -15,8 +15,9 @@
# specific language governing permissions and limitations
# under the License.
import math
import threading
from abc import ABC, abstractmethod
from collections.abc import Callable
from collections.abc import Callable, Hashable
from functools import singledispatch
from typing import (
    Any,
@@ -25,6 +26,9 @@
    TypeVar,
)

from cachetools import LRUCache, cached
from cachetools.keys import hashkey

from pyiceberg.conversions import from_bytes
from pyiceberg.expressions import (
AlwaysFalse,
@@ -1970,6 +1974,20 @@ def residual_for(self, partition_data: Record) -> BooleanExpression:
        return self.expr


_DEFAULT_RESIDUAL_EVALUATOR_CACHE_SIZE = 128
Contributor:
Why 128? I think this is pretty high, and would probably go a bit lower (32?)

Contributor Author:

I should have included this in my initial PR description: my guiding principle was that this feature should lean toward performance safety rather than tight memory tuning.

Residual evaluators are on the hot path for pruning, so a cache miss means rebinding expressions, which is relatively expensive in Python. A cache size of 128 lines up with common LRU defaults, and in practice it helps cut down query time and unnecessary I/O.

In my experience, PyIceberg usually runs on instances with plenty of RAM (multiple GBs), so spending a bit more memory for more predictable performance is a good trade-off. I'll acknowledge this comes from my own experience, so there may be some bias, but I think it's a reasonable default for most real-world workloads. I'm happy to adjust if you feel strongly; maybe we go with 64?
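The trade-off under discussion can be seen in a tiny stdlib sketch (using `functools.lru_cache` as a stand-in for the cachetools `LRUCache` in the PR; the function name and cache size are illustrative, not from this change): once the working set exceeds `maxsize`, entries are evicted and the expensive rebinding cost is paid again.

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=2)  # deliberately tiny so eviction is visible
def bind(spec_id: int, expr_repr: str) -> tuple[int, str]:
    """Stand-in for the expensive expression rebinding."""
    global calls
    calls += 1
    return (spec_id, expr_repr)

bind(1, "x > 5")
bind(1, "x > 5")   # cache hit: the function body does not run again
assert calls == 1
bind(2, "y < 3")
bind(3, "z == 0")  # working set of 3 overflows maxsize=2, evicting (1, "x > 5")
bind(1, "x > 5")   # miss: the rebinding cost is paid a second time
assert calls == 4
```

This is the sense in which a larger cache buys predictability: sizing above the realistic working set of (spec, expression) pairs avoids the repeated-miss pattern shown on the last line.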



def _residual_evaluator_cache_key(
    spec: PartitionSpec, expr: BooleanExpression, case_sensitive: bool, schema: Schema
) -> tuple[Hashable, ...]:
    return hashkey(spec.spec_id, repr(expr), case_sensitive, schema.schema_id)

Contributor (on the signature): Why not pass in spec_id and schema_id here?

Contributor (on the hashkey call): Building the repr of the expr is super expensive. I think it would make more sense to implement __hash__ on the Expression?



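A minimal sketch of that suggestion, assuming expression nodes are immutable: a frozen dataclass derives `__hash__` from its fields, so the cache key becomes a cheap tuple hash instead of building a full repr string on every lookup. `GreaterThan` here is a hypothetical stand-in, not PyIceberg's actual expression class.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GreaterThan:
    """Hypothetical immutable expression node; frozen=True auto-derives
    __eq__ and __hash__ from the fields."""
    term: str
    literal: int

a = GreaterThan("x", 5)
b = GreaterThan("x", 5)
assert a == b and hash(a) == hash(b)  # structurally equal nodes hash equally
cache = {a: "evaluator"}
assert cache[b] == "evaluator"        # an equal node hits the same cache entry
```

With nodes hashable like this, the key function could pass `expr` to `hashkey` directly and drop the `repr(expr)` call entirely.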
@cached(
    cache=LRUCache(maxsize=_DEFAULT_RESIDUAL_EVALUATOR_CACHE_SIZE),
    key=_residual_evaluator_cache_key,
    lock=threading.RLock(),
)
def residual_evaluator_of(
    spec: PartitionSpec, expr: BooleanExpression, case_sensitive: bool, schema: Schema
) -> ResidualEvaluator: