AI Unstoppable - Misuse Posing Danger

We all know that AI has come as a boon, but at times it is becoming a bane, mainly because we tend to focus only on the beneficial and positive aspects of AI, which has reduced human cost to a great extent. Before going into the details of the misuse of AI, which is having dangerous consequences, we must remember what the billionaire Elon Musk, who owns Twitter (now X), has said: according to him, in the next five years AI will bypass human intelligence and become, in a real sense, super-intelligent. He has further emphasised that AI is more dangerous than the atomic bomb. PM Modi has also said that AI is being misused and that it is necessary for us to stay alert.

It has become an everyday affair to read in the newspapers about cyber fraud and "digital arrest". Even highly intelligent and intellectual people are becoming victims of scams, mainly those who are active on Telegram, WhatsApp, Facebook, Instagram and the like. Familiarity with internet banking, Skype, Google Meet and Zoom is, ironically, making people easier targets for cyber criminals. There is no such thing in law as a "digital arrest", yet people are still being lured by the greed of high returns in the share market. They get attracted by messages that land in their social media accounts. With the help of AI, cyber criminals get to know people inside out.

Typically, the advertisements come in the form of free investment-skills training and "zero loss" schemes promising high returns. In many cases, threats follow: the victim is told that a parcel in their name containing contraband has been seized and a warrant issued, with the criminal posing as a police officer. There are bogus stock-trading apps and website links, fake Demat accounts, and sometimes people are lured into joining a group populated with hundreds of fake members, complete with fake display pictures. People are losing the money in their bank accounts, their mobile phones are being hacked, and once a scammer knows your Aadhaar number and phone number, gaining further access becomes very easy.

Deepfakes – a danger to privacy and reputation

Technological advancement in the field of AI has made such tremendous progress that a one-minute video, or around 40 pictures, can be faked to look real, and this capability is surfacing in the form of deepfakes. Recently the actress Rashmika Mandanna complained about a deepfake in which her face was placed on another woman's body. The actress Kajol also complained about such a video. It is very difficult to make out whether a particular video is real or fake. Deepfakes are also being weaponised to smear important women leaders in other countries. It has been reported that Pakistan's Punjab information minister Ms Azma Bokhari became the victim of a counterfeit image: a sexualised deepfake video of her was published, and she felt shattered when it came to her knowledge. She is among the prominent leaders of Punjab province. As a matter of fact, deepfakes manipulate genuine audio, photos or videos of people into false likenesses. It became difficult for the popular leader to convince people that the video showed her face superimposed on the sexualised body of an Indian actress. Shamelessness has no limits for anti-social elements: according to Azma Bokhari, photos of her husband and son were also manipulated to imply that she was in public with another man. Why are we talking about Pakistan? Because media literacy there is poor, and, taking advantage of this, deepfakes are being weaponised to smear women in the public sphere with sexual content, deeply damaging their reputations in a country with conservative mores.

In another case, a deepfake was used in a different manner: when ex-prime minister Imran Khan was in prison, his team used an AI tool to generate speeches in his voice, which were shared on social media and helped him campaign from behind bars.

For men in politics, the harmful effects of deepfakes come when they are criticised, typically over their ideology, corruption or status. For women, deepfakes are more dangerous, tearing down their very image. Agence France-Presse (AFP) sought the opinion of US-based AI expert Henry Ajder, who said, "When they are accused, it almost always revolves around their sex lives, their personal lives, whether they are good mums, whether they are good wives." He further stressed that deepfakes are a very harmful weapon.

There are instances of people being blackmailed: criminals deepfake the voice and picture of a near relative and make a call seeking money on the pretext of an emergency. The only safeguard is that AI cannot copy human emotions. Another danger of AI lies in the area of privacy. Today, if you are on Facebook, LinkedIn, Twitter or WhatsApp, then it is very easy, with the help of AI, to monitor your online and offline activity, because face detection and algorithms can easily identify your movements.

In India, the condition is even worse because growth and advancement in the field of AI have been tremendous, and deepfake videos of top executives are circulated over social media giving financial advice to the public. The Reserve Bank of India has cautioned the public about fake videos of its governor circulating on social media promoting fake investment schemes. These videos, built with such technological tools, attempt to persuade people to invest their money in those schemes. Nobody believed that such negative, harmful effects would come with the introduction of AI.

Fake experts in trading scams

Over the last few years there has been a boom in the stock market, mainly because trading applications help investors learn about the share market, and a large number of advertisements for such applications appear on WhatsApp, Telegram, Facebook and Instagram. These advertisements claim easy earnings in the share market, and fake applications of this kind are a tool for cyber criminals. Like "digital arrest", trading scams are spread with the help of AI. The scammers use fake profiles, and their modus operandi is distinctive. To begin with, they lure people with online investment and part-time work-from-home assignments. Advertisements on social networking sites push fake stock-trading apps and fake website links with the assurance of high returns. Fake audio and video of an "investment advisor" are generated to persuade victims to open a fake Demat account. Fake groups are created with the cyber criminal as sole admin; after a link is sent on social media, people are encouraged to join a group consisting of 200 to 500 members, all with attractive display pictures. Fake discussions between the "advisor" and the members are staged in the chat, with screenshots indicating high profits in shares. People are duped into joining these groups and investing money.

Now the question arises: why and how do even the most intelligent people fall into the clutches of the scammers? First of all, they are influenced by offers of free investment-skills training with the guarantee of zero-loss schemes and 100% returns. The scammers also advertise that they are registered with SEBI and RBI as advisors.

Consider some incidents in which cybercrime victims holding very high positions were still taken in. A woman IAS officer from Bombay was trapped in a high-return scam and lost ₹1 crore; the scammer claimed to be an international expert. Another incident happened with a senior IPS officer, IG D.K. Pandey, whose online trading earnings of ₹381 crore were extorted by scammers. There are a large number of such cases in which educated people have lost money to these cyber scams. The wife of an industrialist from Kodarpuram, Tamil Nadu, downloaded an online share-trading app and invested over ₹10 crore; the scammer showed profits in a virtual account on the app, though nothing was reflected in her actual account, and she realised quite late that she had lost the money.

Cyber-attacks targeting privacy and reputation

Over a period of time, cyber criminals have expanded their fraud tactics beyond imagination. They have now started manipulating narratives, deploying disinformation for the purpose of destabilising organisations and tarnishing reputations. In the recent past, a leading insurance firm became the victim of a data breach. It was not just a case of stolen data but a calculated attempt to destroy the career of the company's CEO. In practice, a hacker going by the name "XenZen" did not just breach the insurance company's systems; with a fabricated e-mail, he also tried to convince the world that the CEO had willingly handed over the sensitive data. This accusation sparked headlines, but it was not true. Let us go deeper into this data-breach story. XenZen posted an offer to sell 7 TB of customer data stolen from the insurance firm, involving the personal information of over 31 million people, including names, addresses and health records, up for grabs on the dark web. The breach itself was very real and on a massive scale, and XenZen claimed that the CEO had leaked the data. Later on, it was revealed that the CEO's involvement was fabricated: XenZen had doctored an e-mail by altering the HTML code with the browser's "inspect element" function, a very easy way to make it look as if the CEO had sent the sensitive information.

The hacker found credentials on the dark web, from a separate credential breach, and used them to exploit a vulnerability in the company's systems. It became a case of exploiting a technical flaw: XenZen had obtained the stolen credentials without insider help, and to access the company's database they exploited an insecure direct object reference (IDOR) vulnerability in the company's API (Application Programming Interface). This is a type of security flaw that allows unauthorised users to access sensitive data simply by manipulating a URL (Uniform Resource Locator). In the insurance company's case, this flaw gave XenZen access to 7 TB of customer information, allowing the data to be stolen without raising any red flags. XenZen never intended to prove insider collusion; the basic objective was to destroy the reputation of the person responsible for protecting the data.
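To see why an IDOR flaw needs no insider help, here is a minimal sketch in Python. It uses an in-memory dictionary in place of a real API, and the record IDs, field names and handler functions are all illustrative assumptions, not details of the breached system described above.

```python
# Minimal sketch of an insecure direct object reference (IDOR) flaw.
# RECORDS stands in for a database behind an API endpoint such as
# /api/records/<record_id> (a hypothetical URL, for illustration only).

RECORDS = {
    101: {"owner": "alice", "health_record": "confidential-A"},
    102: {"owner": "bob", "health_record": "confidential-B"},
}

def get_record_vulnerable(record_id, requesting_user):
    # Vulnerable handler: it trusts the ID taken from the URL and never
    # checks who is asking, so any user can walk through record_id =
    # 101, 102, ... and dump every record in the store.
    return RECORDS.get(record_id)

def get_record_fixed(record_id, requesting_user):
    # Fixed handler: the object reference alone is not enough; the
    # server also verifies that the record belongs to the requester.
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != requesting_user:
        return None  # deny access instead of leaking the data
    return record

# "bob" requesting someone else's record succeeds against the
# vulnerable handler but is blocked once authorisation is checked.
leaked = get_record_vulnerable(101, "bob")
blocked = get_record_fixed(101, "bob")
allowed = get_record_fixed(102, "bob")
```

The point of the sketch is that the "attack" is nothing more than changing a number in a request: if the server performs no ownership check, stolen login credentials plus a loop over IDs are enough to exfiltrate an entire customer database without tripping any alarm.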

AI advancement – unanticipated misuse

In today's scenario, data is the most important commodity: it can be used for beneficial purposes, and it can also be misused by cyber criminals. Our personal, financial and health information is sensitive in nature, and the biggest danger of AI comes when it is misused by hackers. To understand this misuse, let us take a live example.

Johnson was suffering from a skin disease due to an immunity disorder. He consulted a physician, who advised certain medical tests. Pathological laboratories are now so quick and efficient that Johnson got his report online within a few hours. He spoke to the doctor on a phone call and was advised to come the next morning. Unable to wait to learn what the results meant, Johnson thought of using AI and uploaded the test report to an AI chatbot. The response was very quick, and he got detailed information about his disease. But the matter did not end there. Soon after, Johnson started getting advertisements related to his disease on all his social media accounts. After some time he was getting calls from different hospitals recommending treatment and hospitalisation. Johnson realised that his data had been leaked via the AI chatbot.

It has been observed that personal data leakage can have dangerous consequences. We sometimes use chatbots to answer questions, and the chatbots in turn collect our data in order to process the requests. If that data collection is not safe, or if the tools used for processing are not secured, then your data can be stolen by cyber criminals.
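One practical precaution a user can take, before a Johnson-style upload, is to strip obvious identifiers from text locally before sending it to any chatbot or third-party API. The following is a hedged sketch: the regular expressions are deliberately simple illustrations (e-mail addresses, 12-digit Aadhaar-style numbers, 10-digit phone numbers), and real PII detection needs far more care than three patterns.

```python
import re

# Illustrative redaction patterns, applied in order. The Aadhaar-style
# pattern (12 digits in groups of 4) must run before the 10-digit
# phone pattern so the longer number is not partially consumed.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"), "[AADHAAR]"),
    (re.compile(r"\b\d{10}\b"), "[PHONE]"),
]

def redact(text):
    # Replace each matched identifier with a neutral placeholder so the
    # remaining text can be shared with a chatbot with less exposure.
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

msg = "Report for john.doe@example.com, Aadhaar 1234 5678 9012, ph 9876543210"
clean = redact(msg)
# clean == "Report for [EMAIL], Aadhaar [AADHAAR], ph [PHONE]"
```

Redaction on the client side does not make the chatbot provider's storage or encryption any safer, but it limits what a breach on their side can reveal about you.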

Financial data is always in demand among hackers: bank account numbers, credit card details, e-mail IDs, Aadhaar numbers, addresses and so on. Cyber criminals can use such data to create a fake profile in your name, which can damage your credit profile. As per a 2020 report, around 4.35 billion records reached cyber criminals through networking sites. If a chatbot's data storage is not adequately protected and its encryption standards are not high, the data can be stolen.

Business data is equally sensitive: data on product launches, business strategy and other financial details of a company pose a danger if loaded into an AI chatbot.

AI whistle-blower dies by suicide over ethics and values

We have been talking about the dangers of AI, and the most tragic situation came when Suchir Balaji, a 26-year-old former employee of Indian origin at the artificial intelligence giant OpenAI, died by suicide in San Francisco recently, on 26 November 2024.

Balaji was known as a whistle-blower against the AI giant OpenAI, where he had worked for nearly four years. His suicide came three months after he publicly accused OpenAI of violating US copyright law while developing ChatGPT, a generative AI program used by hundreds of millions of people across the world and a money-making sensation. After ChatGPT's launch in late 2022, lawsuits were filed against OpenAI by authors, computer programmers and journalists for illegally using their copyrighted material, with claims valued at 150 billion US dollars.

Nearly one month before his suicide, on 23 October 2024, Balaji gave an interview to The New York Times openly saying that OpenAI was harming businesses and entrepreneurs whose data had been used to train ChatGPT. Balaji had left OpenAI because he no longer wanted to contribute to technologies that he believed would bring society more harm than benefit. He said, "If you believe what I believe, you have to just leave the company. This is not a sustainable model for the internet ecosystem as a whole." Earlier, in a post on X in October itself, Balaji said, "I initially did not know much about copyright, fair use, etc. When I tried to understand the issue better, I eventually came to the conclusion that fair use seems like a pretty implausible defence for a lot of generative AI products, for the basic reason that they can create substitutes that compete with the data they are trained on."

Suchir Balaji, a young and bright technocrat of Indian origin, gave his life for the sake of professional ethics and values, in the larger interest of mankind. He was brought up in Cupertino and studied computer science at UC Berkeley. With no suicide note reported so far, his mother has requested privacy while grieving the death of her son.