Iterative improvement from feedback is a general approach underlying many, if not all, successful systems; ground truth in the loop is critical. Language models (LMs) like ChatGPT are phenomenal; however, issues remain, such as hallucinations and a lack of planning and controllability. We may leverage LMs' competence in language to handle tasks via prompting, fine-tuning, and augmentation with tools and APIs. AI aims for optimality, yet (current) LMs are approximations and thus induce an LM-to-real gap; our aim is to bridge this gap. Previous studies show that grounding, agency, and interaction are the cornerstones of sound and solid LMs. Iterative improvement from feedback is critical for further progress of LMs, and reinforcement learning is a promising framework for it, although pre-training followed by fine-tuning is currently the popular approach. Iterative updates are too expensive for monolithic large LMs, so smaller LMs are desirable and a modular architecture is thus preferred. These measures help LMs adapt to humans, rather than vice versa. We discuss challenges and opportunities, in particular data and feedback, methodology, evaluation, interpretability, constraints, and intelligence.