AI has been all over the news for quite some time, accompanied by many fantastic claims about what it can do. Students hoped that ChatGPT would help them complete college assignments, but that has not been the case. It is still better to turn to professional writing services. To find the best one, students can read reviews on NoCramming and make a wise choice of provider. It is a relief to find such much-needed support.
However, a reality check regarding AI’s actual failures and successes is crucial, and that is what this article is all about.
Although some expect this technology to solve every human problem, from content creation to curing cancer, there have been significant failures. The over-optimistic perspective on artificial intelligence may be premature.
ChatGPT Got Stupider
One of the main expectations of AI is that it will develop and become more intelligent over time. As soon as ChatGPT was released, many people were hyped about it.
It can write code, answer questions, and even draft academic assignments. Many students hoped it would help them with their studies. However, that has not been the case.
According to a recently published paper by Stanford and Berkeley scientists, ChatGPT got noticeably worse within a couple of months. Its accuracy at identifying prime numbers dropped from 97.6% to 2.4%! Its code-writing skills also degraded. As for academic writing, it was never good at that to begin with.
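The prime-number benchmark above is easy to reproduce in spirit: a deterministic primality check serves as ground truth, and a model's yes/no answers are scored against it. A minimal sketch (the sample answers here are made up purely to illustrate the scoring, not taken from the study):

```python
def is_prime(n: int) -> bool:
    """Deterministic trial-division primality check (ground truth)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

def accuracy(answers: dict[int, bool]) -> float:
    """Fraction of a model's prime/not-prime answers that match ground truth."""
    correct = sum(answers[n] == is_prime(n) for n in answers)
    return correct / len(answers)

# Hypothetical model answers: it wrongly calls three primes "not prime".
sample = {17: False, 19: False, 23: False, 29: True}
print(accuracy(sample))  # 0.25
```

A model that mostly answers "not prime" scores near zero on a list of primes, which is the kind of collapse the researchers measured.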
ChatGPT cannot come up with unique ideas; it generates a response based on the data it was trained on. So it cannot produce an authentic paper with an intricate argument. All it can deliver is generic text with little to no value.
When it comes to creating authentic and compelling texts, no tool beats a human writer. It might be disappointing for students to learn that ChatGPT won’t do their college assignments for them.
Professional academic writers can help with all types of college assignments, from homework assistance to dissertation editing, and expert writers will create authentic and thought-provoking texts. So if you need assistance with college papers, check NoCramming to find the best services on the market.
Healthcare AI Can Be Dangerous
Another major claim is that AI can improve healthcare. Although it sounds fantastic, the implemented solutions have produced mixed results. So far, there are 40 FDA-approved AI applications. Some of them can automate specific tasks; others are useless or even dangerous.
For example, IBM’s project to use AI to fight cancer failed. The program advised doctors to give a patient a medication that increases bleeding when the patient was already experiencing severe bleeding. Such suggestions could be fatal if doctors followed them.
Other issues with AI healthcare solutions include:
- Inconsistent performance across facilities (a system works well in one hospital and fails in another);
- Software deployed in the US has developed a bias against minorities;
- Sometimes a solution bases a diagnosis or prediction on the model of MRI machine used;
- Incorrect suggestions are common, which defeats the whole purpose of such systems.
Image Recognition Accuracy Is Subpar
Image recognition is one of the oldest applications of AI technology. But even after almost 20 years of work, it is still far from ideal. Yes, it has gotten significantly better over time, but it still fails regularly.
In 2019, scientists from Berkeley, the University of Washington, and the University of Chicago worked on an interesting project: they gathered a dataset of unedited nature photographs. When analyzing those pictures, the image recognition algorithm was wrong 98% of the time.
AI Learns the Worst Behaviors
One of the most notorious failures of this technology involves Tay, a Microsoft chatbot. Its creators presented it as incredibly advanced, and it was released to roam Twitter. Within 24 hours, the chatbot declared, “Hitler was correct to hate the Jews.”
Exposure to a flood of raw data spoiled an algorithm that had seemed perfect in a lab environment. It parroted the worst human behaviors and began producing hate speech.
Gender Discrimination Still Takes Place
Amazon used AI to automate its hiring process. It seemed harmless enough to let the technology screen candidates. Later it was revealed that the system favored white male candidates, while women candidates were mostly ignored.
This shows that AI is never free from bias, because humans teach it. If the humans who create the datasets and programs are misogynistic, the system will carry the same bias.
AI Success Stories
Although the failures are serious, some good things have come out of this technology as well.
DALL-E is a successful project that creates images from provided text. It works even with abstract and absurd prompts, and so far its performance has been nothing short of superb. One of its main advantages is that it can handle text it has never seen before.
Data labeling is a time-consuming process necessary for computer vision and NLP projects: for algorithms to learn, researchers must provide large amounts of labeled data.
CLIP is a solution that does data labeling quickly and with a high rate of correctness. Instead of recognizing objects in a picture from a fixed list, it matches the picture to a description. Even when working with a dataset designed to confuse data-labeling AI, CLIP achieved 77.1% accuracy.
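CLIP's matching idea can be sketched conceptually: the model embeds an image and several candidate text descriptions into the same vector space, then picks the description whose embedding is most similar to the image's. The toy below uses made-up embedding vectors purely for illustration; in the real CLIP these come from trained image and text encoders:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical embeddings -- illustrative numbers, not real model output.
image_embedding = [0.9, 0.1, 0.2]
captions = {
    "a photo of a dog":   [0.8, 0.2, 0.1],
    "a photo of a cat":   [0.1, 0.9, 0.3],
    "a photo of a plane": [0.2, 0.1, 0.9],
}

# The caption with the highest similarity becomes the label.
best = max(captions, key=lambda c: cosine(image_embedding, captions[c]))
print(best)  # a photo of a dog
```

Because the labels are free-form text rather than a fixed category list, the same scoring loop works for categories the system was never explicitly trained to name.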
One of the most promising solutions so far is Stretch, an AI-powered robot that Boston Dynamics created for warehouse management. It has a grabbing arm, various handy sensors, and computer vision.
It can take over potentially dangerous tasks from human workers, which is excellent.
AI is a powerful technology in theory, but it is less developed than some people are willing to claim. Although it can perform specific tasks and take over dangerous or monotonous activities, it is not free from bias.
It learns from its creators and the datasets they provide, so it can develop gender and racial bias. It can also be dangerously wrong when diagnosing patients. And AI still cannot compete with humans when it comes to academic writing.