An exciting paper (Sept. 2022) with Google participation shows that compositionality in the design of LLM prompts can significantly improve the semantic capabilities of LLMs!
The study shows how an LLM prompted to decompose natural language queries into their grammatical constituents produces much better SPARQL translations of those queries. The technique, called dynamic least-to-most prompting, is demonstrated on the demanding CFQ dataset in the domain of movie production.
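To make the pattern concrete, here is a minimal Python sketch of the least-to-most prompting loop. Everything in it is illustrative: `complete` is a hypothetical placeholder for whatever LLM completion API you use, and the prompts are simplified, not the exact prompts from the paper.

```python
# Minimal sketch of least-to-most prompting for text-to-SPARQL.
# `complete` is a hypothetical stand-in for any LLM completion API.

def complete(prompt: str) -> str:
    """Placeholder: wire this up to your LLM provider's client."""
    raise NotImplementedError("plug in an LLM completion call here")

def decompose(question: str) -> list[str]:
    """Stage 1: ask the model to break the query into simpler
    sub-questions (its grammatical constituents), one per line."""
    prompt = (
        "Decompose the question into simpler sub-questions, one per line.\n\n"
        f"Question: {question}\nSub-questions:"
    )
    return [ln.strip() for ln in complete(prompt).splitlines() if ln.strip()]

def to_sparql(question: str) -> str:
    """Stage 2: translate the sub-questions from least to most complex,
    feeding each partial SPARQL translation back into the next prompt."""
    context = ""
    sparql = ""
    for sub in decompose(question) + [question]:
        prompt = (
            "Translate the question into SPARQL.\n\n"
            f"{context}Question: {sub}\nSPARQL:"
        )
        sparql = complete(prompt).strip()
        context += f"Question: {sub}\nSPARQL: {sparql}\n\n"
    return sparql  # translation of the full, most complex question
```

The key design point is the sequential solution step: each sub-translation is prepended to the next prompt, so the model assembles the final SPARQL query compositionally instead of attempting it in one shot.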
This is exactly what you can do with TextVerstehen training and test data: teach your language model to decompose natural language input linguistically!