compiler: fix halo placement for non out dim exchange #2892
Conversation
Codecov Report

❌ Patch coverage is

@@            Coverage Diff             @@
##             main    #2892      +/-   ##
==========================================
- Coverage   83.48%   83.46%   -0.02%
==========================================
  Files         248      248
  Lines       51551    51570      +19
  Branches     4437     4441       +4
==========================================
+ Hits        43038    43045       +7
- Misses       7761     7774      +13
+ Partials      752      751       -1
==========================================
Force-pushed from 27a8ff6 to 5178529
Force-pushed from b802a82 to b4d0594
```python
np.random.seed(0)
v = TimeFunction(name="v", grid=grid, space_order=4,
                 time_order=1, save=Buffer(1))
v.data[:] = np.random.randn(*grid.shape)
```
Would it be worth using a sinusoidal function here and comparing to the analytical solution? Would need a bigger grid however, and the existing check is already pretty strong
not a big fan of randn either
It's random with a seed. I tend to prefer random values for tests; they force things to be more stable, but I can change it if it makes people uncomfortable.
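The point being made here, that seeded random data is reproducible and therefore as stable as hand-written values, can be illustrated without Devito. This is a minimal sketch; `make_test_field` is a hypothetical helper, not part of the PR:

```python
import numpy as np

def make_test_field(shape, seed=0):
    # Seeding makes the "random" data deterministic, so the test is
    # reproducible across runs while still exercising every grid point
    # with nontrivial values.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

a = make_test_field((4, 4))
b = make_test_field((4, 4))
assert np.array_equal(a, b)  # same seed, identical data
```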
Force-pushed from 3a8c97a to ca423ac
```python
            dims.add(ai.parent)
        else:
            dims.add(ai)
    elif ai is not None and not ai._defines & set(f.dimensions):
```
Most importantly, I don't understand why the prior `if` wouldn't catch this? If `ai` is `rrecx` and it's being used to index into, say, `x`, then `d` should be `x`, and `f.grid.is_distributed(d)` in the condition above should have returned True already... no? What am I missing here?
That's from when I tried to fix `is_distributed` to actually check whether the dimension is decomposed, but it broke a lot of stuff so I had to revert it. Right now `is_distributed` is meaningless, since it always returns True.
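The distinction being discussed, "is a grid dimension" versus "is actually split across MPI ranks", is what the topology lookup in the diff below expresses. A standalone sketch, where `topology` is a hypothetical stand-in for the `grid.distributor.topology` mapping (dimension to number of ranks) used in the PR:

```python
def is_actually_decomposed(topology, d):
    # A dimension is truly decomposed only if the process grid has
    # extent > 1 along it; a dimension absent from the mapping is
    # owned by a single rank and so is not decomposed.
    return topology.get(d, 1) > 1

topology = {'x': 4, 'y': 1}
assert is_actually_decomposed(topology, 'x')      # split over 4 ranks
assert not is_actually_decomposed(topology, 'y')  # single rank
assert not is_actually_decomposed(topology, 'z')  # absent: defaults to 1
```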
```python
        fmapper[f] = fmapper.get(f, hse).merge(hse)
    return HaloScheme.build(fmapper, self.honored)

def _is_iter_carried(self, scope):
```
Why did you hoist this? It doesn't feel right that data dependence analysis belongs to HaloScheme, but again, I may be missing something.
I moved it here because I was checking if it would be enough to use in the communication pass. There is no strong reason to, but it potentially makes it easier to use in a different Cluster pass if needed, instead of having to import it from iet.
```python
for f in hs.fmapper:
    for a in c.scope.getwrites(f):
        is_write = set(a.access.indices) & loc_vals
        is_dist = any(c.grid.distributor.topology.get(d, 1) > 1
```
`any(f.grid.is_distributed(d) for d in hs.dimensions)`
NOTE: `f.` and not `c.` is slightly more robust, because not all Clusters may have an underlying Grid.
No; the main issue is that `is_distributed` is useless, as it always returns True for a grid dimension irrespective of whether it's decomposed or not.
```python
for f in hs.fmapper:
    for a in c.scope.getwrites(f):
        is_write = set(a.access.indices) & loc_vals
        is_dist = any(c.grid.distributor.topology.get(d, 1) > 1
```
You can hoist this condition out of the `f` loop and return False immediately if it doesn't hold; then you can turn the current `is_write` condition into a one-liner: `if any(set(...) for a in ...)`.
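The suggested refactor could look something like the following standalone sketch. All names here (`needs_halo`, `getwrites`, `topology`, `loc_vals`) are hypothetical stand-ins for the objects in the quoted snippet (`hs.fmapper`, `c.scope.getwrites`, etc.), not real Devito API:

```python
def needs_halo(fmapper, getwrites, topology, loc_vals, dimensions):
    # Hoisted check: if no dimension is split across MPI ranks,
    # no halo exchange can be needed, so bail out before the f loop.
    if not any(topology.get(d, 1) > 1 for d in dimensions):
        return False
    # The former per-access `is_write` check, collapsed to a one-liner.
    return any(set(indices) & loc_vals
               for f in fmapper
               for indices in getwrites(f))

writes = {'u': [('t', 'xl')], 'v': [('t', 'x')]}
assert needs_halo(writes, writes.__getitem__, {'x': 2}, {'xl'}, {'x'})
assert not needs_halo(writes, writes.__getitem__, {'x': 1}, {'xl'}, {'x'})
```

The early return avoids recomputing a loop-invariant condition per access, which is the point of the review suggestion.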
```python
@pytest.mark.skipif(TestTTI is None, reason="Requires installing the tests")
@pytest.mark.parallel(mode=1)
def test_halo_structure(self, mode):
    from test_dse import TestTTI
```
`from .test_dse import TestTTI`
or does it not matter? I will never remember...
I think both work? This was just moved from a top-level import because somehow importing it at the top runs all the tests in TestTTI at import time.
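The trick being used, deferring the import into the test body so that importing the test module itself has no side effects, can be sketched generically. This is a hypothetical helper, and a stdlib pair stands in for the `test_dse` / `TestTTI` pair from the diff:

```python
import importlib

def get_test_class(module_name, class_name):
    # Deferring the import into a function body means it only runs
    # when the function is called, so importing the module that
    # defines this helper triggers nothing in `module_name`.
    mod = importlib.import_module(module_name)
    return getattr(mod, class_name)

cls = get_test_class("collections", "OrderedDict")
assert cls.__name__ == "OrderedDict"
```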
Force-pushed from ca423ac to f3fcfaa
Force-pushed from f3fcfaa to 86fa36f
Force-pushed from 8a227ea to 595e27c
Force-pushed from 595e27c to 076af37
No description provided.