Concerned about AI? Try searching for it on Google!


According to a research report by Public First, a UK-based think tank, commissioned by Google, the benefits of applying Artificial Intelligence (AI) to work processes outweigh the possible downside risks.

The study, which focused primarily on the British economy, stressed that AI could enhance almost two-thirds of British jobs, and estimated that 31 per cent of jobs would be insulated from AI. Significantly, the report, published on July 25, says AI would radically transform 61 per cent of jobs.

Commenting on the research report, Debbie Weinstein, the Managing Director of Google UK, said: “Fewer than 50 per cent of people are actually taking advantage of these tools in their working life on a day-to-day basis.

“The uptake of these tools is very low, and I think the only way we’re going to unlock the potential of what AI can do is actually by getting people to use them, and to feel confident and capable about them.”

In effect, what the report portrays is that instead of worrying about job losses arising from AI applications, we need to focus on how AI could make us work smarter and faster.

Well, I was attracted to this story because of my deep interest in AI-related activities and the fact that the report sheds light on an important debate in the AI market.

In fact, if you are an ardent reader of this column, you would have come across several articles I have written here on various aspects of AI. AI is a dynamic, fascinating and evolving field that has confused many.

AI has confused many because of the several grey areas that are yet to be fully addressed. Even those who helped birth AI are unsure of some of the underlying ethical issues that could spring up.

In the May 6, 2023, edition of this column, for example, under the headline “AI Ethics: Why does it matter?”, I told the story of the resignation from Google of Geoffrey Hinton, a man described as the godfather of AI.

In fact, announcing the resignation, this is how one report captured it in a headline: “Godfather of AI, Geoffrey Hinton, quits Google and warns over dangers of misinformation”. Hinton is often touted as the godfather of AI because of his pioneering work on neural networks.

It is common knowledge that Hinton, together with two of his students at the University of Toronto, built a neural network in 2012 which laid the groundwork for current systems and applications such as ChatGPT.
 
Now, what led to his exit from Google is the most intriguing part of AI development. Both The Guardian newspaper of the United Kingdom and the New York Times of the United States of America stressed that Hinton was leaving due to concerns over the flood of misinformation, “the possibility for AI to upend the job market and the ‘existential risk’ posed by the creation of a true digital intelligence”. Note the mention of the job market!
 
In an interview with the New York Times, Hinton stated that until 2022, he was confident that Google had been a “proper steward” of the technology he had pioneered. However, his confidence dipped once Microsoft started incorporating a chatbot into its Bing search engine, and Google became concerned about the risk to its search business.

In another interview with the BBC, Hinton stressed some of the dangers of AI chatbots, describing them as “quite scary”, and bemoaned how they could be exploited by “bad actors”.

In fact, he said AI could become more intelligent than humans, and that had the potential to distort the future of work.

“It’s able to produce lots of text automatically. So, you can get lots of very effective spambots. It will allow authoritarian leaders to manipulate their electorates, things like that,” Hinton said in the referenced interview.  

“I’ve concluded that the kind of intelligence we’re developing is very different from the intelligence we have [as humans],” he said, adding: “So it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
 
That aside, I have my concerns too. Over the years, I have also become concerned about the ethics of AI and Machine Learning (ML). In the April 1, 2023, edition of this column, for example, I explained how the quantum leap in the application of AI models had confused even the big adopters of the technology.

This was based on an open letter signed by major AI players, including Elon Musk. In the letter, the authors opined: “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control…Powerful AI systems should be developed only when we are confident that their effects will be positive and their risks will be manageable.”

The UK think tank, Public First, provides some useful insights too, saying just a handful of jobs were likely to be fully “phased out” by the introduction of AI, estimating that even the most affected sector, financial and insurance, could lose just four per cent of jobs in Britain, with 83 per cent “enhanced” instead. In fact, similar research reports on AI draw parallel conclusions.

Weinstein also stressed: “Part of what’s tricky about us talking about it now is that we actually don’t know exactly what’s going to transpire. What we do know is the first step is going to be sitting down and really understanding the use cases. If it’s school administrators versus people in the classroom, what are the particular tasks we actually want to get after for these folks?

“If you are a schoolteacher, some of it might be a simple email with ideas about how to use Gemini in lesson planning, some of it might be formal classroom training and some of it one-on-one coaching. Across 1,200 people, there will be a lot of different pilots, each group with around 100 people.”

AI has great potential benefits. However, its development and application must be backed by strong ethical principles.