Growing Use of Generative AI in the Workplace Raises Concerns Over Data Governance and Security


Since the release of ChatGPT by OpenAI in November 2022, generative artificial intelligence (GenAI) has rapidly gained traction. This technology allows users to generate context-specific text responses to a range of prompts, which has proven valuable in various tasks such as drafting emails and enhancing chatbot capabilities.

The increased interest in GenAI is evident, with a recent report by Microsoft indicating that 75% of knowledge workers now use some form of GenAI in their daily work.

The Microsoft survey, which covered over 31,000 professional employees, revealed that nearly half of those using GenAI started within the past six months. Interestingly, the adoption of these tools is not limited to younger, tech-savvy users; people of all ages are embracing this technology.

Small businesses, in particular, are increasingly incorporating GenAI into their operations. This widespread use underscores the growing importance of GenAI in modern work environments.

However, alongside the adoption of GenAI tools comes the challenge of managing “digital debt.” With an ever-growing volume of data and digital communications, many workers feel overwhelmed, particularly with the sheer volume of emails they must process daily.

The Microsoft report highlights that 85% of emails are read in less than 15 seconds, illustrating the high-volume, repetitive tasks that are driving employees to seek out GenAI tools to streamline their workloads. The stress of handling this digital overload has led a significant number of professionals to report feelings of burnout.

One of the main issues with using external GenAI tools, such as ChatGPT, is their lack of corporate oversight. Since these tools are often publicly available and free to use, they fall outside the direct control of an organization’s IT department.

Sarah Armstrong-Smith from Microsoft points out that using free GenAI tools may expose sensitive company data to external entities, as the data entered could be used to train the AI models behind these tools. The question arises: is the data being protected, or is it becoming a commodity for others?

The real concern with these external tools is the potential for poor data governance. These tools often operate in a “shadow IT” environment, where employees use unapproved software outside the organization’s standard systems. This can lead to data leakage, where corporate information is unknowingly shared with third-party platforms.

Furthermore, when employees input unverified data into these tools, they risk introducing incorrect or misleading information into their organization’s knowledge base, which could undermine decision-making processes.

Data governance issues also include the risk of “poisoning” corporate datasets. When unverified data from external GenAI tools is incorporated into company systems, it can negatively impact the accuracy of internal models and algorithms.

For instance, if employees unknowingly use inaccurate data generated by an external AI tool, it could result in flawed outputs within a company’s AI systems, leading to potentially serious business consequences. This was highlighted in an incident where a lawyer used ChatGPT to draft legal documents, only for the AI to fabricate fake case references.

The solution for many organizations is to invest in internal GenAI tools that are designed to work within the company’s network. These tools provide a higher level of data security and governance, ensuring that sensitive information remains protected throughout the development and usage of the AI.

By keeping AI tools within the organization’s IT infrastructure, companies can mitigate the risks of data leakage and maintain control over their information. However, it is still crucial for businesses to implement robust data governance practices to ensure the security of both the back-end and front-end systems.

For organizations to successfully integrate GenAI tools, it is essential to raise awareness among employees about the availability and benefits of internal solutions. Many workers may be unaware that their company offers tools that can replace external GenAI platforms, so education and training programs can help promote these in-house tools.

By guiding employees toward using corporate GenAI solutions, companies can reduce the security risks associated with external applications. As GenAI continues to evolve, it will become an integral part of workplace technology, but businesses must remain vigilant about protecting their data.
