


LaMDA


 

1.Context

Blake Lemoine, a U.S. military veteran engaged by Google to test for bias or hate speech in the Language Model for Dialogue Applications (LaMDA), claims that the updated software is now sentient.
He claims that the neural network, with its deep-learning capacity, has the consciousness of a seven- or eight-year-old child.
He argues that the software's consent must therefore be obtained before experiments are run on it.
 

2.Is AI technology here?

  • AI technology appears futuristic, but it is already all around us: Facebook's facial recognition software that identifies faces in the photos we post, the voice recognition software that interprets the commands we bark at Alexa, and the Google Translate app are all examples of AI tech in everyday use.
  • AI tech today aims to pass the Turing test to qualify as intelligent.
  • Alan Turing was a pioneering British mathematician and computer scientist; the code-breaking machines he helped design at Bletchley Park during the Second World War cracked messages encrypted by Germany's Enigma machine.
  • To test if a machine thinks, Turing devised a practical solution.
  • Place a computer in one closed room and a human in another. If an interrogator interacting with both cannot discriminate between the machine and the human, then, Turing argued, the computer should be construed as intelligent (see the sketch after this list).
  • The reverse Turing test, CAPTCHA, limits technology access to humans and keeps the bots at bay.
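
The imitation game itself is a simple blind protocol and can be sketched in a few lines of Python. Everything here is illustrative: machine_reply, human_reply and the judge are hypothetical stand-ins, not any real chatbot.

```python
import random

def machine_reply(prompt: str) -> str:
    # Hypothetical stand-in for the chatbot under test.
    return "That is a fascinating question. What do you think?"

def human_reply(prompt: str) -> str:
    # Hypothetical stand-in for the hidden human respondent.
    return input(f"(hidden human, answer) {prompt}\n> ")

def imitation_game(questions, judge):
    """Blind protocol: the judge sees answers labelled 'A' and 'B'
    without knowing which respondent is the machine."""
    a_is_machine = random.choice([True, False])
    respondents = {"A": machine_reply if a_is_machine else human_reply,
                   "B": human_reply if a_is_machine else machine_reply}
    transcript = [(q, {label: ask(q) for label, ask in respondents.items()})
                  for q in questions]
    guess = judge(transcript)  # the judge names the label it believes is the machine
    # False means the machine went undetected, i.e. it 'passed' the test.
    return guess == ("A" if a_is_machine else "B")
```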

3.Which were the first chatbots to be devised?

  • As electronics improved and first-generation computers came about, Joseph Weizenbaum of the MIT Artificial Intelligence Laboratory built ELIZA, a computer programme with which users could chat.
  • ALICE (Artificial Linguistic Internet Computer Entity), another early chatbot developed by Richard Wallace, was capable of simulating human interaction.
  • In the 1930s, linguist George Kingsley Zipf analysed typical human speech and found that the bulk of utterances was built from a core vocabulary of about 2,000 words.
  • Using this information, Wallace theorised that the bulk of commonplace chitchat in everyday interaction was limited.
  • He found that about 40,000 responses were enough to cover 95 per cent of what people chatted about.
  • With assistance from about 500 volunteers, Wallace continuously improved ALICE's repertoire of responses by analysing user chats, making the fake conversations look real (a minimal pattern-matching sketch follows this list).
  • The software won the Loebner Prize as "the most human computer" at the Turing Test contests in 2000, 2001 and 2004.
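
Pattern-matching chatbots of this kind can be approximated in a few lines. The following Python sketch is in the spirit of ELIZA and ALICE, not their actual code: a handful of hand-written rules stands in for ALICE's roughly 40,000 responses.

```python
import re

# A tiny hand-written repertoire in the spirit of ELIZA/ALICE;
# the real ALICE held roughly 40,000 such response rules.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."

def reply(user_input: str) -> str:
    """Return the canned response for the first matching pattern."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(reply("I am worried about my exams"))
# -> Why do you say you are worried about my exams?
```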


4.What is a neural network?

  • It is an AI tech that attempts to mimic the web of neurons in the brain to learn and behave like humans.
  • Early efforts in building neural networks targeted image recognition.
  • The artificial neural network (ANN) must first be trained, much as a dog is, before it can act on commands.
  • For example, during image recognition training, thousands of specific cat images are broken down into pixels and fed into the ANN.
  • Using complex algorithms, the ANN's mathematical system extracts particular characteristics from each cat image, such as a line that curves from right to left at a certain angle, edges, or several lines that merge to form a larger shape.
  • The software learns to recognise the key patterns that delineate what a general cat looks like from these parameters.
  • Early machine learning software needed human assistance.
  • The training images had to be labelled as 'cats', 'dogs' and so on by humans before being fed into the system (see the toy training sketch after this list).
  • In contrast, access to big data and a powerful processor is enough for the emerging deep learning software.
  • The software learns by itself, unsupervised by humans, by sorting and sifting through the massive data and finding the hidden patterns.
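
As a toy illustration of the supervised training described above, the following Python/NumPy sketch trains a two-layer network on labelled 16-'pixel' vectors. The data, network sizes and learning rate are all invented for the example; real image recognisers are vastly larger.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'images': 16-pixel vectors, labelled 1 ('cat') or 0 ('not cat'),
# standing in for the thousands of labelled photos a real system needs.
X = rng.random((200, 16))
y = (X[:, :8].sum(axis=1) > X[:, 8:].sum(axis=1)).astype(float).reshape(-1, 1)

# A two-layer network: 16 pixels -> 8 hidden features -> 1 output.
W1 = rng.normal(0.0, 0.5, (16, 8))
W2 = rng.normal(0.0, 0.5, (8, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass: hidden units act as learned feature detectors.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass (gradient descent): nudge weights to shrink the error.
    delta_out = (out - y) * out * (1 - out) / len(X)
    delta_h = (delta_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ delta_out)
    W1 -= 0.5 * (X.T @ delta_h)

print(f"training accuracy: {((out > 0.5) == y).mean():.0%}")
```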


5.What is LaMDA?

  • It is short for 'Language Model for Dialogue Applications', Google's modern conversational agent enabled with a neural network capable of deep learning. 
  • Instead of images of cats and dogs, the algorithm is trained using 1.56 trillion words of public dialogue data and web text on diverse topics.
  • The model, built on Google's open-source neural network architecture, Transformer, learned more than 137 billion parameters from this massive body of language data (a bare-bones sketch of the self-attention at the Transformer's core follows this list).
  • The chatbot is not yet public, but select users are permitted to interact with it.
  • Google claims that LaMDA can make sense of nuanced dialogue and engage in fluid, natural conversation.
  • LaMDA 0.1 was unveiled at Google's annual developer conference in May 2021, and LaMDA 0.2 in 2022.
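
The Transformer's central trick is self-attention, in which every word in a sequence weighs its relevance to every other word. Below is a bare-bones, single-head version in Python/NumPy, a sketch of the mechanism rather than Google's implementation.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a
    sequence of word vectors X (seq_len x d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # How strongly each word attends to every other word.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax over the sequence (max-subtracted for stability).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # context-aware representations of each word

rng = np.random.default_rng(0)
d = 8                                # toy embedding size; LaMDA's is far larger
X = rng.normal(size=(5, d))          # a five-'word' sequence
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # -> (5, 8)
```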

 

6.How is LaMDA different from other chatbots?

  • Chatbots like "Ask Disha" of the Indian Railway Catering and Tourism Corporation Limited (IRCTC) are routinely used for customer engagement.
  • The repertoire of topics and chat responses is narrow.
  • The dialogue is predefined and often goal-directed: try, for instance, chatting about the weather with Ask Disha or about the Ukrainian crisis with the Amazon chat app (a toy goal-directed bot is sketched after this list).
  • LaMDA is Google's answer to the quest for developing a non-goal-directed chatbot that dialogues on various subjects.
  • The chatbot would respond the way a family might when chatting over the dinner table, with topics meandering from the taste of the food to price rise to bemoaning the war in Ukraine.
  • Such advanced conversational agents could revolutionise customer interaction and help AI-enabled internet search, Google hopes.
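
A goal-directed bot can be thought of as a small intent table, with everything outside its predefined intents falling through to a refusal. The Python sketch below is hypothetical; the keywords and responses do not come from the real Ask Disha.

```python
# Hypothetical intent table for a narrow, goal-directed bot;
# the real Ask Disha is more elaborate but similarly bounded.
INTENTS = {
    "pnr": "Please share your 10-digit PNR to check booking status.",
    "refund": "Refunds are credited to the original payment method.",
    "cancel": "You can cancel a ticket from the 'My Bookings' page.",
}

def goal_directed_reply(message: str) -> str:
    for keyword, response in INTENTS.items():
        if keyword in message.lower():
            return response
    # Off-topic chat (weather, current affairs) falls outside the goal.
    return "Sorry, I can only help with ticketing queries."

print(goal_directed_reply("What's the weather like?"))  # -> refusal
print(goal_directed_reply("Check my PNR status"))       # -> PNR prompt
```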
 

7.Is AI intelligent?

  • The Turing test is a powerful motivator for developing practical AI tools.
  • Philosopher John Searle uses the 'Chinese Room' argument to demonstrate that passing the Turing test is inadequate to qualify as intelligent.
  • Once I used Google Translate to read WhatsApp messages in French from a conference organiser in France and, in turn, replied to her in French.
  • For some time, she was fooled into thinking that I could speak French.
  • I would have passed the 'Turing test', but no sane person would claim that I know French.
  • This is an example of the Chinese room experiment.
  • The imitation game goes only so far. Further, scholars point out that AI tech rests on a false analogy of learning.
  • A baby learns a language from close interaction with caregivers and not by ploughing through a massive amount of language data.
  • Moreover, whether intelligence is the same as sentience is a moot question.
  • The seemingly human-like conversational agents rely on pattern recognition, not empathy, wit, candour or intent.
 

8.Is the technology dangerous?

  • The challenge of AI metamorphosing into a sentient being lies far in the future.
  • The real danger to watch for is unethical AI perpetuating historical bias and echoing hate speech.
  • Imagine an AI software trained with past data to select the most suitable candidates from applicants for a supervisory role.
  • Women and marginalised communities hardly would have held such positions in the past, not because they were unqualified, but because they were discriminated against.
  • While the machine itself has no bias, AI software learning from historical data could inadvertently perpetuate that discrimination (illustrated in the toy example below).
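
A toy Python experiment makes the mechanism concrete. The data below is entirely synthetic: both groups are equally qualified, but the historical selections favour group 0, and the model picks up that bias through a proxy feature even though it never sees group membership directly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Synthetic hiring history: qualification is distributed identically in
# both groups, but past selectors favoured group 0.
group = rng.integers(0, 2, n)            # 0 = majority, 1 = marginalised
qualification = rng.normal(0.0, 1.0, n)
past_selected = (qualification + 1.5 * (group == 0)
                 + rng.normal(0.0, 0.5, n) > 1.0).astype(float)

# The model never sees 'group' directly, only a correlated proxy
# (think postcode or alma mater) alongside qualification.
proxy = group + rng.normal(0.0, 0.3, n)
X = np.column_stack([qualification, proxy])

# Plain logistic regression trained on the historical decisions.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - past_selected
    w -= 0.5 * X.T @ grad / n
    b -= 0.5 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
for g in (0, 1):
    print(f"group {g}: model selects {pred[group == g].mean():.0%}")
# Despite identical qualification distributions, the model selects group 0
# far more often: it has learned the historical bias through the proxy.
```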
 
 
 


 Source: The Hindu


