Models systematically drop enhancement features when processing semantic noise
When models encounter semantic noise, they maintain the core workflow (Search → Check → Reserve) but systematically drop optional "nice-to-have" enhancement features.
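One way to make this measurable is to check each tool-call trace for the core sequence and count how many optional calls survive. The sketch below assumes a trace is a list of tool-call names; the specific call names and the core/enhancement split are hypothetical placeholders, not the original evaluation harness.

```python
# Hypothetical tool-call names; the original feature set is not specified.
CORE_SEQUENCE = ["search", "check_availability", "reserve"]
ENHANCEMENT_CALLS = {"add_loyalty_points", "send_confirmation_email"}

def core_preserved(trace: list[str]) -> bool:
    """True if the core calls appear in order (other calls may interleave)."""
    it = iter(trace)
    return all(any(call == step for call in it) for step in CORE_SEQUENCE)

def enhancement_count(trace: list[str]) -> int:
    """Count how many optional enhancement calls the model still made."""
    return sum(1 for call in trace if call in ENHANCEMENT_CALLS)

clean_trace = ["search", "check_availability", "add_loyalty_points",
               "reserve", "send_confirmation_email"]
noisy_trace = ["search", "check_availability", "reserve"]

assert core_preserved(clean_trace) and core_preserved(noisy_trace)
print(enhancement_count(clean_trace), enhancement_count(noisy_trace))  # 2 0
```

Under this framing, the finding is that `core_preserved` stays true across conditions while `enhancement_count` falls when noise is added.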
This drop in enhancement features is statistically significant (p = 0.028) and indicates that models prioritize essential steps under cognitive load.
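The source does not state which test produced p = 0.028, but a natural choice for comparing feature-retention rates between conditions is Fisher's exact test on a 2×2 contingency table. The counts below are made-up placeholders purely to show the shape of the computation.

```python
from scipy.stats import fisher_exact

# Rows: clean vs. noisy prompts; columns: runs that kept vs. dropped
# the enhancement features. Counts are hypothetical, not the study's data.
table = [[18, 2],   # clean: kept, dropped
         [11, 9]]   # noisy: kept, dropped

_, p_value = fisher_exact(table, alternative="two-sided")
print(f"p = {p_value:.3f}")
```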
This pattern suggests that LLMs maintain an implicit task hierarchy. When noise makes processing more complex, they shed non-essential features while preserving core functionality, much as humans do under stress.