Why ethical and responsible AI is our most important investment

We’re fast approaching the one-year anniversary of generative AI with the launch of ChatGPT in November 2022, and businesses continue to strategize on how to properly implement the technology into their business to help achieve more productivity, cost savings and operational efficiencies .

With the rise of generative AI, people are using the technology to help identify the fastest way to close a deal, provide quick answers to repetitive questions and tasks, better understand customer behavior, and create highly personalized shopping experiences customer experiences.

Businesses across all industries are racing to use generative AI and AI assistants to improve their operations and drive customer loyalty by delivering the best possible experience. However, as the use of AI assistants has increased, it is clear that companies need to adopt a data-first approach to ensure that these assistants are not only effective, but also safe to use in the business world.

We caught up with Claire Cheng, Sr. Director of AI Engineering at Salesforce, the world’s #1 AI CRM that enables companies of all sizes and industries to connect with their customers through the power of data, AI, CRM and trust. Claire shares why investing in ethical and responsible AI is the most important investment a business can make to stay ahead.

Gary Drenick: Tell me why ethical and responsible AI should be a top priority when building AI assistants.

Claire Cheng: As brands increasingly embrace AI to increase efficiency and meet customer expectations, nearly three-quarters of customers are concerned about the unethical use of AI. A recent study by Prosper Insights & Analytics found that 43.6% of people who use tools like ChatGPT use it for research. These AI assistants, which are built from large language models (LLMs), are trained on trillions of documents that help generate the information people are looking for. These vast amounts of data inevitably contain biases that can be reflected in the model output if organizations do not adopt a data-first approach when using AI.

Organizations need to build AI with trust at the center of every AI effort. At Salesforce, we prioritize ethical and responsible AI, helping our customers leverage their trusted proprietary data in the Data Cloud to power their AI outcomes, and fostering a data-first approach that is inherently compliant with ethical practices in enterprise-level applications. This approach ensures that our customers can trust that their AI-driven insights and actions are deeply rooted in respect for customer privacy and data governance, paving the way for enterprise AI solutions that are not only powerful, but also principled and reliable.

Drenik: What are the biggest ethical hurdles facing the data community as we lean into this new evolution of AI-powered assistants?

Cheng: The biggest ethical hurdle facing the data community is educating both producers and consumers of data about the limitations of AI and the biases algorithms can introduce when generating results derived from the data used for training . That’s why Einstein’s trust layer and other technologies are so important in helping companies minimize worry and roadblocks by being able to trust AI-generated content and predictions.

There is a skills gap that needs to be filled, as more than 60% of skilled professionals believe they lack the necessary skills to use AI effectively and safely. As more complex models are developed, these biases can be reinforced and amplified if left unchecked or if humans are unable to identify these limitations early to refine AI models before launch.

Drenik: How can businesses best respond to ensure their data is reliable, accurate and secure?

Cheng: AI is only as good as the data that powers it, which is why businesses need to be intentional about their data strategy. That’s why Salesforce’s Data Cloud has been so widely adopted—it’s the key to helping businesses better understand and unlock customer data and deliver actionable insights in real-time and at scale.

Another important step is how they refine AI models and how those models access data and collect data from trusted sources. For example, to improve the quality of forecasts and leads, Salesforce uses the dual approach of being grounded in customer data and continuous iterative improvement with customer feedback.

Dynamic grounding – the process of extracting relevant, factual, up-to-date data and limiting LLMs for generations – remains key to delivering highly relevant and accurate results. For example, by basing a generative AI model with customer data from trusted sources, such as their knowledge articles, the answers generated are more accurate and relevant.

Customer feedback also helps validate and improve model accuracy. Having customers provide explicit and implicit feedback on how they use or change their AI outputs greatly helps organizations refine AI assistants and improve the bottom line through feedback-based learning.

Drenik: Can generative AI be used to identify biases in AI assistants and data? Or will humans always be needed to get it right and ensure trust?

Cheng: 73% of IT leaders have concerns about the potential for bias in generative AI. When testing AI models for bias, it is important that these tests are performed in simulated environments. At Salesforce, for example, we start with predictive AI that analyzes historical data patterns to predict future outcomes. We mitigate bias by testing these algorithms over time and developing quantitative measures of bias and fairness. This helps detect and correct bias issues before deployment.

Addressing bias in generative AI remains a more complex problem, requiring new evaluation methods such as competitive testing. With adversarial testing, our team works to intentionally push the boundaries of generative AI models and applications to understand where the system may be weak or susceptible to bias, toxicity, or inaccuracy.

There may come a time when humans will not be as necessary to ensure trust, but in the early stages of generative AI development and building trust with the end user, it is important that humans remain engaged to help recognize and reject adversarial inputs so that the AI ​​model does not produce toxic, biased, dangerous or inaccurate results.

Drenik: What are the privacy implications for businesses using generative AI, in addition to trust, and how can they ensure that appropriate safeguards are used to ensure that sensitive customer data is not accidentally leaked, for example?

Cheng: Privacy and trust are the most important drivers for businesses using generative AI. In fact, 79% of customers say they are increasingly protecting their personal data. That’s why safety is one of Salesforce’s trusted AI principles.

It is critical that organizations make every effort to protect the privacy of any personally identifiable information (PII) present in data used for training and to create safeguards to prevent further harm. This can be done by force-publishing code to a sandbox instead of automatically sending it to production to analyze what results are generated before external use.

Organizations must also respect the provenance of data and ensure consent is given for data use, especially when pulling from open source and user-provided sources. A recent survey by Prosper Insights & Analytics found that 48.1% of adults have denied permission to mobile apps to track their data, so businesses will need to first ensure permission is granted before using any sets of data in the development of AI assistants.

Drenik: How can companies ensure that IT teams using generative AI solutions are properly trained with the skills to detect data anomalies and safely build AI assistants?

Cheng: A recent Salesforce survey found that 66% of senior IT leaders believe employees lack the skills to successfully use generative AI. However, almost all respondents (99%) believe that organizations need to take matters into their own hands to successfully use technology.

To begin with, organizations should adopt a set of guiding principles for employees to follow when interacting with AI assistants and analyzing the data used to train them. For example, Salesforce was the first company to publish principles for developing generative AI. Once these principles are in place, investing in upskilling employees to better understand what reliable data sources look like and how to keep sensitive data secure are ideal places for organizations to start training employees.

For example, at Salesforce, both employees and our end users can visit Trailhead to start building their AI skill sets as generative AI and AI assistants become more pervasive in everyday work.

Drenik: Thanks, Claire, for sharing your insights on AI assistants and why ethical and responsible AI is the most important investment organizations can make as this technology becomes more widely used in business.

Take a look my website.

Leave a Comment

Your email address will not be published. Required fields are marked *