2024, as you probably know, is an election year in 40 countries. Campaigning begins this month in Taiwan and ends in the United States in November. In fact, Bloomberg Economics estimates that 41% of the world’s population will elect new leaders this year.
At the same time, generative AI is fueling the spread of disinformation and deepfakes that could skew the outcome of many leadership contests in the biggest election year in history.
Big Tech was already under scrutiny in 2023; 2024 is the year these companies must prove they can be trusted by governments, the public, and businesses. It’s no surprise, then, that IBM is mobilizing its market positioning around responsible AI, with the IBM Institute for Business Value (IBV) posing the question:
Will a lack of trust constrain companies from spending on AI?
According to IBM’s IBV report on responsible AI and ethics, globally, fewer than 60% of executives believe their companies are ready for AI regulation, and 69% expect their adoption of AI to result in regulatory fines.
In the face of this uncertainty and risk, CEOs are pumping the brakes: more than half (56%) are holding off on major investments until there is clarity on AI standards and regulations, and 72% of executives say their organizations would forgo the benefits of generative AI because of ethical concerns.
Heather Gentile, watsonx.governance product management director at IBM Data and AI Software, commented that since the launch of ChatGPT, organizations have been tightening their policies and procedures around AI adoption in response to ethical concerns. To develop responsible AI, organizations need metrics and controls in place.
However, projects are typically initiated in siloed areas of activity, making risk management difficult. Generative AI also brings new risks, such as hallucinations, exposure of personally identifiable information (PII), and toxic or profane output.
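To make the last two risks concrete, here is a minimal, generic output-screening sketch — not IBM's implementation, and the patterns and blocklist are deliberately simplistic stand-ins for the far more robust detection real guardrail tooling uses:

```python
import re

# Illustrative patterns only; production PII detection uses much more robust methods.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}
BLOCKLIST = {"damn"}  # stand-in for a real profanity lexicon

def screen_output(text: str) -> dict:
    """Flag PII matches and blocklisted words in a model response."""
    pii_hits = {name: pat.findall(text) for name, pat in PII_PATTERNS.items()}
    profanity = [w for w in text.lower().split() if w.strip(".,!?") in BLOCKLIST]
    return {
        "pii": {k: v for k, v in pii_hits.items() if v},
        "profanity": profanity,
        "safe": not any(pii_hits.values()) and not profanity,
    }
```

A governance layer would run a check like this on every response and route failures into an audit trail rather than back to the user.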
Organizations need to strengthen their governance strategies for generative AI
IBM first announced the Watson platform in 2010, but market acceptance of the technology was slow at the time. Last year, IBM rebuilt its Watson capabilities by introducing the watsonx platform, comprising watsonx.ai and watsonx.data (which lets the platform access data on any cloud).
The company recently added watsonx.governance to the watsonx portfolio. This component evaluates use-case requests and the models behind them: it captures model facts, assesses how models change over time, checks for bias, drift, and risk, and helps ensure regulatory compliance. Watsonx.governance is built into the watsonx.ai workflow and monitors the AI lifecycle from use-case request through model testing to continuous model evaluation. Alerts raised by watsonx.governance are routed back to prompt engineering so that the governance process is captured automatically. The product also includes a model inventory for matching risk management and compliance requirements to controls in real time.
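Drift checking of the kind described above is often done by comparing a model's live input or score distribution against a baseline. As a rough illustration of the idea (a generic Population Stability Index calculation, not watsonx.governance's actual method or thresholds):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant baseline

    def hist(values):
        counts = [0] * bins
        for v in values:
            idx = min(max(int((v - lo) / width), 0), bins - 1)
            counts[idx] += 1
        total = len(values)
        return [max(c / total, 1e-6) for c in counts]  # floor avoids log(0)

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A monitoring job would recompute this on a schedule and raise an alert once the index crosses the chosen threshold.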
Understanding the flow of data from source systems to end use is arguably as important as understanding the LLM itself, so IBM has further strengthened watsonx.governance by acquiring Manta, which gives IBM a way to manage the data that feeds into models. Manta supports over 50 scanners and provides catalog integration (pushing metadata to many catalogs). It can trace the technical lineage of data, down to details such as rule syntax, as well as the historical lineage of how the data in a pipeline has changed over time.
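At its core, lineage of this kind is a directed graph walked backwards from a model's training data to its sources. A toy sketch of that idea follows; the asset names and graph shape are hypothetical examples, not Manta's actual metadata model:

```python
from collections import defaultdict

# Toy lineage graph: each edge points from an upstream asset to an asset derived from it.
edges = {
    "crm_db.customers": ["etl.clean_customers"],
    "etl.clean_customers": ["warehouse.features"],
    "web_logs.raw": ["warehouse.features"],
    "warehouse.features": ["llm.training_set"],
}

def upstream_sources(target, graph):
    """Walk the graph backwards to find every asset feeding a given target."""
    reverse = defaultdict(list)
    for src, dsts in graph.items():
        for dst in dsts:
            reverse[dst].append(src)
    found, stack = set(), [target]
    while stack:
        node = stack.pop()
        for parent in reverse[node]:
            if parent not in found:
                found.add(parent)
                stack.append(parent)
    return found
```

Asking for the upstream sources of the training set would surface every system whose data quality and compliance posture the model inherits.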
Successful governance requires not only tools but also broader organizational commitment, which is why IBM Data and AI Software is working closely with IBM Consulting to help clients understand the maturity of their generative AI deployments.
To be effective, an organization’s responsible AI approach must involve a range of stakeholders. The entire C-suite must be engaged to promote best practices and ensure accountability, so that data scientists, developers, and machine learning engineers work in line with both organizational ethics and external law.
My view
The ability to regulate and control the use of generative AI will be a major technology trend in 2024, especially as governments, law enforcement, and major social media platforms focus on its fraudulent use to sway election results. Businesses are understandably already concerned about the liabilities that the use of AI may expose them to, while at the same time trying to capture the competitive opportunities the technology offers.
IBM is interesting in this context because it owns both AI technology and a large professional services division, setting it apart from competitors such as Microsoft and Google on the one hand, and Accenture and Deloitte on the other. It also has a reputation for conservative prudence, a boon for many enterprise teams looking to navigate the somewhat frenzied atmosphere surrounding generative AI adoption.
So the combination of the watsonx portfolio and IBM Consulting’s expertise could breathe new life into the old corporate adage that no one gets fired (or sued, or fined) for buying IBM.