AI agents are computer programs that can take actions on their own to perform complicated tasks with minimal human oversight. They use large language models (LLMs) to make decisions and solve problems in a way that traditional, rule-based programming cannot.
By combining decision-making skills with the ability to access databases, browse the internet, and perform complex actions such as intricate financial trades, AI agents represent a significant step towards autonomous real-world problem solving.
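As a rough illustration of that architecture, the sketch below shows a bare-bones agent loop in Python: a model decides the next step, the program carries it out through a small set of permitted tools, and the loop repeats until the model signals it is finished or a step limit is reached. The model call and both tools are stubbed placeholders for the purpose of the example, not any particular vendor's API.

```python
# A minimal, illustrative agent loop. The LLM call is stubbed out; in
# practice it would be a request to a hosted model, and the tools would be
# real databases, browsers or trading systems rather than toy functions.

from typing import Callable

# Hypothetical "tools" the agent is permitted to invoke.
def lookup_price(symbol: str) -> str:
    return f"{symbol}: 101.50"          # stand-in for a market-data query

def place_order(symbol: str, qty: int) -> str:
    return f"ordered {qty} x {symbol}"  # stand-in for a trade execution

TOOLS: dict[str, Callable[..., str]] = {
    "lookup_price": lookup_price,
    "place_order": place_order,
}

def call_llm(goal: str, history: list[str]) -> dict:
    """Stub for the model call: chooses the next action from the goal and
    what has happened so far. A real agent would send this context to an
    LLM and parse its reply."""
    if not history:
        return {"tool": "lookup_price", "args": {"symbol": "ABC"}}
    if len(history) == 1:
        return {"tool": "place_order", "args": {"symbol": "ABC", "qty": 10}}
    return {"tool": None, "args": {}}   # the model signals it is finished

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):          # hard cap on autonomous steps
        decision = call_llm(goal, history)
        if decision["tool"] is None:
            break
        result = TOOLS[decision["tool"]](**decision["args"])
        history.append(result)
    return history

print(run_agent("maximise profit on ABC"))
```

Even in this toy form, the pattern shows why the technology is different in kind: the programmer defines the tools and the step limit, but the sequence of actions is chosen by the model at run time.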
AI agents promise a shift in how we understand technology’s role in our daily lives. They blur the lines traditionally drawn between user and tool, developer and machine, raising the issue of whether our existing legal frameworks are agile enough to adapt. By embracing some safeguards and guidelines, organisations and regulators can tap into the enormous benefits of AI agents without relinquishing accountability.
AI agents operate with a significant increase in autonomy over previous iterations of generative AI. Unlike earlier software automations that simply execute a predefined set of instructions, AI agents can adapt swiftly to new inputs and evolving circumstances. They stand out for their ability to handle open-ended tasks with a level of independence once reserved for human decision-makers.
The ability to take different routes to reach a goal, even to operate in unpredictable ways, makes them ideal for tasks that demand both speed and flexibility. At the same time, this adaptability and unpredictability introduce a new level of risk. Because the paths they choose are less constrained, AI agents can produce unexpected or undesirable outcomes.
This deeper level of independence prompts pressing questions about liability.
Say, for example, an AI agent is instructed to maximise profits. In the process, it executes a risky series of trades that destabilises a market. Who shoulders the blame?
The question becomes thornier as the work of AI agents advances from lower levels of autonomy, where human users still perform substantial oversight, to the highest levels, where agents act with minimal intervention; the gradient of liability follows.
We have already seen the groundwork for this in the United Kingdom’s approach to autonomous vehicles, where manufacturers assume progressively greater responsibility once the car can effectively drive itself. As we move across similar thresholds in artificial intelligence, with agents “levelling up” and humans relaxing hands-on control, the onus for harm could lean more heavily on those who design or deploy the AI’s core functionality.
Amid these evolving liability scenarios, well-defined oversight protocols can help ensure safety and accountability, especially in high-stakes domains such as finance, law and healthcare.
Humans can maintain authority over major actions in several ways.
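One such approach, sketched below in illustrative Python with hypothetical names and thresholds, is an approval gate: any action whose impact exceeds a defined limit is held until a person explicitly signs it off, while routine actions proceed automatically.

```python
# Illustrative human-in-the-loop gate: high-impact actions proposed by an
# agent are held until a person approves them. The threshold, action names
# and values are hypothetical.

from dataclasses import dataclass

APPROVAL_THRESHOLD = 10_000  # actions above this notional value need sign-off

@dataclass
class ProposedAction:
    description: str
    notional_value: float

def requires_human_approval(action: ProposedAction) -> bool:
    return action.notional_value >= APPROVAL_THRESHOLD

def execute(action: ProposedAction) -> None:
    print(f"Executing: {action.description}")

def review_and_execute(action: ProposedAction) -> None:
    if requires_human_approval(action):
        answer = input(f"Approve '{action.description}'? [y/N] ")
        if answer.strip().lower() != "y":
            print("Action rejected; nothing executed.")
            return
    execute(action)

review_and_execute(ProposedAction("buy 50,000 shares of ABC", 5_000_000))
```

The design choice here is that the default is inaction: if the human reviewer does not positively approve a high-impact step, the agent cannot take it, which keeps a clear point of accountability in the chain of decisions.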
AI agents represent a significant transformation in generative AI technology, providing unparalleled autonomy and efficiency in managing complex tasks. However, this advancement requires a thorough examination of our oversight mechanisms and understanding of our liability frameworks. By balancing innovation with responsibility, we can remain vigilant to the risks of AI agents while fully realising the benefits.