The use and improvement of Natural Language Generation (NLG) is a recent development that is progressing at a rapid pace. Its benefits range from easily deployed auxiliary automation tools for simple, repetitive tasks to fully functional advisory bots that offer help with complex problems and meaningful solutions across various domains. With fully integrated autonomous systems, the question of errors and liability becomes a critical concern. While various measures to mitigate and minimize errors are in place and are continually being improved through different error-testing datasets, they do not preclude significant flaws in the generated output.
From a legal perspective, it must be determined who is responsible for undesired outcomes of NLG algorithms: does the manufacturer of the code bear the ultimate responsibility, or is it the operator who failed to take reasonable measures to minimize the risk of inaccurate or unwanted output? The answer becomes even more complex when third parties interact with an NLG algorithm and may thereby alter its outcomes. While traditional tort theory links liability to the possibility of control, NLG may be an application that defies this notion, since NLG algorithms are not designed to be controlled by a human operator.