Advisory & Applied Work begins not as a consulting service, but as a shared interpretive effort.
The starting point is not a problem to be solved, but a situation: a context where large language models are already present, already shaping practices, and already raising questions.
This page is for those who want not only to operate AI systems, but also to understand how LLMs transform thinking, decision-making, and structures of responsibility in organizational, educational, and innovation settings.
Interpretive advisory
Interpretive advisory delivers not solutions, but frames for thinking.
It focuses on how LLMs appear within organizational or communal practices, what meanings they acquire, and what implicit decisions they shape.
This work is often slower than technological implementation, but it is deeper and more durable.
Human-centered AI sense-making
Introducing LLMs is not only a technical task, but an interpretive challenge.
People do not simply “use” these systems; they enter into relations with them: questioning, relying on, resisting, or negotiating their outputs.
This section supports processes through which AI becomes understandable, discussable, and responsibly usable.
Organizational reflection
LLMs often enter organizations informally and experimentally.
This creates invisible practices, parallel solutions, and unspoken norms.
Organizational reflection aims to surface and interpret these practices before they solidify into norms or risks.
Education & dialogue
A key element of applied work is shared learning.
Through workshops, conversations, and educational formats, experiences with LLMs become not isolated knowledge but shared, discussable practices.
The emphasis is not on instruction, but on dialogue.
