AI App Predicts Conflict Between Couples Before It Occurs

If you wish to avoid conflict with your partner, keep an eye out for an upcoming app from the University of Southern California (USC).

The new smartphone app uses artificial intelligence (AI) to analyze language patterns and certain physiological signs in order to predict conflict between couples. Using this app, couples could avoid arguments and spend their time on something more productive.

Previous conflict-monitoring experiments involved real-life couples observed in the tightly controlled settings of psychology labs. This one took a completely different approach: USC researchers studied couples in their normal living environments, using wearable devices and smartphones to collect data. Their early findings suggest that combining wearable devices with machine-learning-based AI could lead to the successful development of “app-counselors”.

Adela Timmons, a doctoral candidate in clinical and quantitative psychology at USC, explains that current models can detect conflict as it occurs but cannot predict it before it happens, which is why the new AI app is the first of its kind. “In our next steps, we hope to predict conflict episodes and to also send real-time prompts, for example prompting couples to take a break or do a meditation exercise, to see if we can prevent or deescalate conflict cycles in couples,” Timmons said.

Testing showed that the AI app can accurately predict conflict 79.3% of the time, which is good but not good enough. For this reason, the researchers plan to collect additional data to boost the algorithm's accuracy.
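
The researchers haven't released their model, but the general recipe described above is easy to picture: extract features from the wearable and language data for each time window, train a binary classifier on windows labeled by whether a conflict episode followed, and trigger a real-time prompt when the predicted risk is high. The sketch below is a minimal, hypothetical illustration of that recipe; the feature names, synthetic data, and scikit-learn classifier are all assumptions, not the USC team's actual pipeline.

```python
# Hypothetical sketch (not the USC team's code): a binary classifier that
# predicts imminent conflict from wearable and language features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Toy data: each row is a short time window described by assumed features
# [heart_rate, skin_conductance, vocal_pitch_variance, negative_word_rate].
X = rng.normal(size=(500, 4))
# Label: 1 if a conflict episode followed the window, 0 otherwise (synthetic).
y = (X @ np.array([0.8, 0.6, 0.5, 1.2]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# A real-time prompt could be triggered when the predicted probability is high.
risk = model.predict_proba(X_test[:1])[0, 1]
if risk > 0.7:
    print("Suggest a break or a brief meditation exercise.")
```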

Source:

IEEE Spectrum (http://spectrum.ieee.org/the-human-os/robotics/artificial-intelligence/algorithm-aims-to-predict-bickering-among-couples)

Digital Trends (http://www.digitaltrends.com/cool-tech/ai-app-couples-fighting/)

OpenAI Creates Unsupervised Sentiment AI

Today, most artificial intelligence (AI) systems use machine learning to learn about and understand the world around them. As we all know, this type of learning relies on a particular data set and established rules that are fed into the machine; based on that information, the machine becomes able to predict certain outcomes, and in essence the algorithm learns without being explicitly programmed for each task. Now, researchers from OpenAI have created a system trained to predict the next character in the text of Amazon reviews, and it turned out to learn representations of sentiment entirely on its own.

Because the system was trained only on the next-character prediction task, the researchers were surprised to discover that it had developed, without supervision, an excellent representation of sentiment. The OpenAI team says this phenomenon is probably not specific to their model, but is instead a general property of certain large neural networks.
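
OpenAI's actual model was a large multiplicative LSTM trained on tens of millions of Amazon reviews; the sketch below is a much smaller, hypothetical illustration of the same two-step idea: train a character-level language model on raw review text, then reuse its hidden state as a feature vector for sentiment. The PyTorch architecture and layer sizes here are assumptions chosen for brevity.

```python
# Simplified sketch of the idea behind OpenAI's result (not their actual model):
# 1) train a character-level language model to predict the next character,
# 2) reuse its hidden state as features for a separate sentiment classifier.
import torch
import torch.nn as nn

class CharLM(nn.Module):
    def __init__(self, vocab_size=256, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 64)
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h), h  # next-char logits and per-step hidden states

def next_char_loss(model, batch):
    # batch: LongTensor of byte values, shape (B, T)
    logits, _ = model(batch[:, :-1])
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), batch[:, 1:].reshape(-1))

def review_features(model, batch):
    # The final hidden state summarizes the review; in OpenAI's model a single
    # unit of this vector turned out to track sentiment (the "sentiment neuron").
    with torch.no_grad():
        _, h = model(batch)
    return h[:, -1, :]  # (B, hidden) features for a simple linear classifier

# Toy usage: encode text as bytes, minimize next_char_loss on unlabeled reviews,
# then fit a small supervised classifier on review_features to read out sentiment.
model = CharLM()
toy = torch.randint(0, 256, (2, 32))
print(next_char_loss(model, toy).item(), review_features(model, toy).shape)
```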

The fact that this AI system was able to learn in an unsupervised way is a major breakthrough for artificial intelligence in general. It could mean that machines will soon be able to learn almost completely by themselves, reducing the need for training, large data sets, and the time necessary for them to learn certain tasks.

Source:

OpenAI Blog (https://blog.openai.com/unsupervised-sentiment-neuron/)

Futurism (https://futurism.com/ai-learns-to-read-sentiment-without-being-trained-to-do-so/)

AI’s Help for Detecting Fake News

In West Virginia, students at the WVU Reed College of Media are not waiting for Google or Facebook to find a solution to fake news. Computer science majors are collaborating with them to create an AI that can detect fake news: it will analyze the text of a news article and assign it a score that indicates whether or not the article is fake.
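
The students' system hasn't been published, so the sketch below is only a hypothetical illustration of the text-scoring approach described above: a classifier trained on labeled articles that outputs the probability that a new article is fake. The toy data, TF-IDF features, and logistic regression model are assumptions made for the example.

```python
# Hypothetical fake-news scorer: a bag-of-words text classifier that assigns
# each article a probability of being fake (not the WVU students' system).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data; a real system would need a large, carefully curated corpus.
articles = [
    "Scientists publish peer-reviewed study on vaccine safety.",
    "Local council approves budget after public hearing.",
    "SHOCKING: celebrity cures cancer with one weird trick!!!",
    "You won't BELIEVE what the government is hiding from you!",
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = fake

scorer = make_pipeline(TfidfVectorizer(), LogisticRegression())
scorer.fit(articles, labels)

new_article = "Miracle pill melts fat overnight, doctors hate it!"
fake_score = scorer.predict_proba([new_article])[0, 1]
print(f"Estimated probability the article is fake: {fake_score:.2f}")
```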

The purpose of this AI is to take the hard work off humans' hands. The system will work from the same information a human reviewer would have, but it will be able to analyze articles every day without stopping. The bias inherent in human analysis should also fade, because the AI has no feelings.

According to the Center's Creative Director, Dana Coester, “[fake news] is also a social and political problem with roots in technology.” Developing the AI has also given the computer science students a more creative class than those offered in previous years.

Source: https://www.sciencedaily.com/releases/2017/03/170327143654.htm

AI Predicts Whether People Will Think Your Photo Is Awesome

A startup called Everypixel has developed an innovative way to help you make the best out of your photography. The company has trained a neural network to “see the beauty of stock photos in the same way as you do”, meaning you can now ask the neural network whether people will think your photo is awesome.

By mixing artificial intelligence and photography, Everypixel is trying to change the ‘unfair’ situation in the stock photography market. Thanks to their new tool, the startup brings stock images from every existing stock photo agency into one place, giving users everything they need to make an informed decision about where to buy photos.

The tool in question is called the Aesthetics tool, and although it is still in beta testing, its promise is already apparent: it lets customers upload photos and get auto-generated tags along with a percentage rating of the “chance that this image is awesome”.

Here is how the startup taught its AI to see images the way we do: first, it consulted designers, editors, and photographers to help generate a training dataset, and then it gave the neural network examples to follow. The system processed both positive and negative patterns, learning along the way to see the beauty in images just as human beings do.
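
Everypixel hasn't released the Aesthetics model itself, but the recipe it describes (a network trained on examples rated by designers, editors, and photographers, producing a percentage score) can be sketched roughly as below. The pretrained ResNet-18 backbone and the sigmoid score head are assumptions for illustration, not Everypixel's actual architecture.

```python
# Hypothetical "awesomeness" scorer: a pretrained image backbone with a single
# output head that, after fine-tuning on rated photos, yields a 0-100 score.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Assumed backbone; requires torchvision >= 0.13 for the weights API.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # one "awesomeness" logit
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def awesomeness(path: str) -> float:
    """Return a 0-100 score for an image. The head above is untrained, so the
    score is meaningless until the model is fine-tuned on positively and
    negatively rated example photos."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logit = backbone(img)
    return torch.sigmoid(logit).item() * 100

# Example (hypothetical file): awesomeness("photo.jpg") -> e.g. 73.4 after training.
```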

In essence, the system is designed to help users create/buy the best possible images. If that’s not something you’re interested in, then at least you have an opportunity to ask an artificially intelligent machine what it thinks of your image.

Source:

Everypixel (https://everypixel.com/aesthetics)

Digital Trends (http://www.digitaltrends.com/photography/everypixel-aesthetic-ranks-photos/)

Baidu Taught Its AI the Same Way a Human Teaches Their Child

Chinese tech company Baidu has achieved a major milestone – its researchers taught an artificial intelligence (AI) system that “lives” in a 2D environment how to move through its world using natural language commands, in a process that mimics how humans learn.

We were all taught in pretty much the same way: our parents showed us images, repeated words, used various examples and, of course, positive and negative reinforcement, so that we could associate their words with those images and examples. Now, Baidu's researchers have managed to do a very similar thing with their own AI system. Using positive and negative reinforcement, they taught a virtual agent things that it can now apply within its environment and to new situations.

This – applying past knowledge to new situations – may seem like no big deal to us because we're so used to it, but for AI it is a major breakthrough. Here's what the Baidu research team had to say about this milestone:

Although machines may know what a “dragon fruit” looks like, they can’t perform the task “cut the dragon fruit with a knife” unless they have been explicitly trained with the dataset containing this command. By contrast, our agent demonstrated the ability to transfer what they knew about the visual appearance of a dragon fruit as well as the task of “cut X with a knife” successfully, without explicitly being trained to perform “cut the dragon fruit with a knife”.
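
The key point in the quote is compositionality: the agent grounds objects and learns verb templates separately, so a pairing it has never seen can still be carried out. The toy sketch below is a deliberately simplified, hypothetical illustration of that idea and does not reflect Baidu's actual agent architecture.

```python
# Toy illustration of zero-shot composition: object grounding and verb
# templates are learned independently, so an unseen combination still works.
learned_objects = {"apple": "object_7", "dragon fruit": "object_42"}      # from vision
learned_skills = {"cut {obj} with a knife": "skill_cut", "pick up {obj}": "skill_pick"}

def execute(command: str) -> str:
    for template, skill in learned_skills.items():
        prefix, suffix = template.split("{obj}")
        if command.startswith(prefix) and command.endswith(suffix):
            obj_name = command[len(prefix):len(command) - len(suffix)]
            obj = learned_objects.get(obj_name)
            if obj is None:
                return f"unknown object: {obj_name}"
            # The skill was never trained on this particular object, but it can
            # operate on any grounded object reference.
            return f"executing {skill} on {obj}"
    return "unknown command"

# Never explicitly trained on this exact sentence, yet it resolves correctly:
print(execute("cut dragon fruit with a knife"))   # executing skill_cut on object_42
```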

The team now plans to extend their study to a 3D environment, so they can ultimately use this technique to create AI that’s more practical for real-life applications.

Source:

Baidu Research (http://research.baidu.com/ai-agent-human-like-language-acquisition-virtual-environment/)

TechCrunch (https://techcrunch.com/2017/03/30/baidus-ai-team-taught-a-virtual-agent-just-like-a-human-would-their-baby/)

Is Artificial Intelligence's Growth Slowing Down?

Today, many companies are investing in artificial intelligence because of how fast the field appears to be growing. But Gary Marcus, the former head of Uber's AI lab, says that artificial intelligence is not moving as fast as many people think. Marcus worries that AI research could get stuck at a point where current approaches can no longer produce further progress.

Ilya Sutskever, research director at OpenAI, disagrees: he has said that AI is growing at a very healthy pace and is very close to reaching the human level.

Marcus, who is a professor at NYU, argues that companies shouldn't over-invest in AI, because the hype can cause researchers to lose sight of the field's long-term future. In the past, a lack of progress in AI led to a lack of investment. If companies pour money into AI today, the returns may not be as large as they expect; however, if AI's growth does not stall, investing in it over the longer term may prove to be the better bet.

Source: https://www.technologyreview.com/s/603945/is-artificial-intelligence-stuck-in-a-rut/