Emotibot’s conversation understanding technology fully understands the context, emotion, and intent of a conversation; it can memorize the conversation and build a user profile from the conversational context. The signals obtained from conversation are the most direct and useful information for generating insights, making inferences, and offering personal recommendations.

Emoti-Chat & Emoti-Brain enable developers of mobile applications, social platforms, and IoT devices to have their own chatbot, highly customized to their needs. Through Emotibot’s easy-to-use API, everyone can have their own chatbot, and every device and robot can have its own profile and personality.
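As an illustration only, here is a minimal sketch of how a developer might assemble a bot-profile request for such an API. The field names, values, and the `build_bot_profile` helper are assumptions for the sake of the example, not Emotibot’s actual API.

```python
import json

def build_bot_profile(app_key: str, name: str, personality: str) -> str:
    """Assemble a JSON payload registering a custom bot profile.

    All field names below are hypothetical -- the real Emotibot API
    may use a different schema.
    """
    payload = {
        "app_key": app_key,              # developer credential (assumed field)
        "profile": {
            "name": name,                # the bot's display name
            "personality": personality,  # e.g. "cheerful", "calm"
        },
    }
    return json.dumps(payload)

# Example: register a cheerful bot named "Mochi" for a demo app key.
body = build_bot_profile("demo-key", "Mochi", "cheerful")
print(body)
```

The point of the sketch is the shape of the integration: the developer supplies only a credential and a few profile attributes, and the service handles the conversation understanding behind the scenes.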
Chats with the user like a friend, remembering what the user said and what they like and dislike. It can recognize and memorize more than 25 attributes through conversation, and the Emotibot brain evolves quickly, learning automatically every day.
Over 40 conversation components simulate how brain agents work. Emotibot’s brain-simulating components, including its algorithms, learning models, and robot skills, update automatically through self-learning.
Can recognize 22 types of emotion from text and 7 types of emotion from voice.
Vision Understanding
Emotion, Face, Object, Motion, Video
Emoti-Face provides facial recognition, expression, and emotion understanding capabilities. Emoti-Eye provides object recognition, such as clothing and regular objects, as well as real-time fatigue detection.
Based on state-of-the-art deep learning methods, Emoti-Face and Emoti-Eye now enable 16 image processing applications.
Emoti-Face’s accuracy is among the best in the world: 95.63% on the FDDB benchmark, 13% ahead of the most advanced company in China (Competitor 1: 84.68%; Competitor 2: 87.97%).
Recognize 19 face features and analyze 7 different emotions
Recognize 8 categories of clothing with 80% accuracy
Emotibot provides the full-version vision understanding API. Emoti-Face & Emoti-Eye bring the most cutting-edge deep learning AI technologies to your image processing application.
Emotibot is working with MIT Media Lab and MIT professor Dr. Rosalind Picard on multi-modal emotion understanding of videos and images by developing state-of-the-art deep learning methods. Emotion recognition and understanding using multi-modal signals, including image, voice, and language, provides the most accurate reading and understanding of human emotion, feeling, and mood, whether over a period of time or for a snapshot of expression.
22 kinds of text emotions from spoken language.
7 kinds of emotion recognition from facial expressions.
7 kinds of emotion recognition from voice recognition and understanding.
Through the easy-to-use Emoti-Services API, users can upload a short video, soundtrack, and text, and receive an overall emotion analysis based on the corresponding image, sound, and text. The results can be used in many scenarios that depend on understanding human emotion: watching a video, viewing an ad, engaging in a meeting, attending a class, or a customer service agent handling a call.
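The combination of image, sound, and text into one overall result can be pictured as a late-fusion step: each modality produces its own emotion scores, and the service blends them into a single distribution. The sketch below shows one common way to do this, a weighted average; the weights, labels, and the fusion method itself are illustrative assumptions, as Emotibot has not published how its actual fusion works.

```python
def fuse_emotions(image, voice, text, weights=(0.4, 0.3, 0.3)):
    """Blend per-modality emotion scores into one normalized distribution.

    Each argument is a dict mapping emotion label -> score for that
    modality. The modality weights are illustrative, not Emotibot's.
    """
    labels = set(image) | set(voice) | set(text)
    fused = {}
    for label in labels:
        fused[label] = (weights[0] * image.get(label, 0.0)
                        + weights[1] * voice.get(label, 0.0)
                        + weights[2] * text.get(label, 0.0))
    total = sum(fused.values()) or 1.0          # avoid division by zero
    return {k: v / total for k, v in fused.items()}

# Example: three modalities mostly agree on "happy".
scores = fuse_emotions(
    image={"happy": 0.7, "neutral": 0.3},
    voice={"happy": 0.5, "sad": 0.5},
    text={"happy": 0.9, "neutral": 0.1},
)
print(max(scores, key=scores.get))  # → happy
```

A weighted average is the simplest fusion rule; real systems may instead learn the combination jointly with a neural model, but the input/output shape of the analysis is the same.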
Enables developers of mobile apps, social platforms, and IoT devices to have their own customizable chatbot, robot profile, and personality through Emotibot’s easy-to-use API. With Emotibot’s sophisticated understanding of a conversation’s topic, intent, and emotion, apps can interact with users like a friend.
Enables 16 professional image processing applications, ranging from face detection and emotion recognition to clothing and specific product search. It can be used in e-commerce, media, and retail stores that need visual information interpreted.
Multi-modal Affective Computing
Based on multi-channel emotion understanding across image, voice, and text, Emoti-Service not only provides comprehensive analysis and insight into the user’s emotional state and transitions, but also reacts with the proper emotion.
The first self-learning emotional robot that understands and remembers you.
End of Best Friends?
Emotional, personal, and truly knowing you
Talk like a human, with emotion
Intent, emotion, and contextual understanding
Establish an emotional connection
Long-term and short-term memory of preferences, habits, and states
Predict users’ real needs and provide the right content and services just in time
Self-learning and self-adapting
Emotibot connects people, devices, content, and services.
Affiliates & Partners
Emotibot Technologies Limited was founded by Kenny Chien (former Partner Engineering Director at Microsoft Asia), with co-founders Mr. Hu Yang and Mr. Ivan Xu. Emotibot is dedicated to creating an A.I. companion, built on affective computing and state-of-the-art neuroscience and deep learning technologies, to enrich people’s lives.