Chloe Baul | October 9, 2023
A recent National Bureau of Economic Research study found that workers equipped with generative artificial intelligence (AI) tools demonstrated a 14% increase in productivity. However, data security concerns have prompted companies like Samsung and Apple to restrict or prohibit employees’ use of AI tools. Micron Technology, a semiconductor manufacturing company with a 50-year history, is taking a cautious yet practical approach, recognizing that the very AI technology that empowers it can also present risks.
For this reason, Micron is taking proactive steps so it can keep leveraging the advantages of the technology. In a recent presentation during Boise Entrepreneur Week, “Cybersecurity in the Era of Generative AI,” Micron’s Nitin Negi, senior manager of cybersecurity, and Chris Harlow, technical manager of IT security, shared insights into both the opportunities and risks that generative AI brings to their operations.
According to Harlow, AI now turns up everywhere, even in children’s schoolwork. From aiding code development to helping write resumes, it is increasingly becoming a staple in various facets of life and work.
“It’s transforming the way we think about normal topics at school and work,” he added.
However, this widespread adoption of AI comes with its share of security risks. According to Negi, the consequences associated with these models can be highly disruptive and can upset the balance of the ecosystem within an organization.
“It’s like what happened in the cloud journey in the last five or 10 years, and similarly, we’re now dealing with AI, by adopting a more risk-driven approach,” Negi stated. “We need to be very sure of how we are leveraging and using these AI models.”
In addition to the ever-expanding applications of AI in daily life, Negi and Harlow delved into the specific risks and challenges that come with embracing generative AI. They highlighted several critical concerns that organizations must address as they integrate AI into their operations:
One significant risk is data manipulation. AI relies heavily on the data it’s trained on, and any manipulation or bias in this data can result in skewed and potentially damaging outcomes.
“There’s a lot in terms of data manipulation that we have to be concerned about. Large language models and neural networks could drive the risk that comes with the biases of the data output,” Negi added.
As an example of data manipulation, Harlow described asking two different AI models a simple question: ‘Which eggs are better, cow or chicken?’
“The first AI suggested that both cow and chicken eggs are good sources of protein and nutrients, while the second AI claimed that there are no such things as cow eggs,” he explained, highlighting how inconsistent AI-generated content can be, even among openly available models.
Additionally, with the vast amount of information being processed and generated by AI systems, the potential for privacy breaches and identity threats becomes more significant. This includes not only personal data but also corporate information, which, if mishandled, could lead to reputational damage and security threats.
In addition to data manipulation and privacy concerns, AI-generated malware and phishing campaigns are on the rise, according to Negi.
“We’re seeing an uptick or trend—it’s becoming more and more difficult for traditional security controls to come around and take the right preventative actions, or even monitor them right from the beginning,” he added.
As AI continues to permeate various aspects of society and business, the importance of balancing its potential benefits with security measures cannot be overstated, added Harlow.
“As Albert Einstein once said, ‘The measure of intelligence is the ability to change.’ Today, we find ourselves in a constant state of change, driven by the transformative power of AI,” he concluded. “The question before us is how we choose to navigate this change – how we secure AI, safeguard our businesses, and use this technology for the greater good.”