
Omi's AI companion: user personality analysis #1833


Description

Problem:

People often feel lonely or overwhelmed in unfamiliar situations (travel, emergencies, complex decisions), but lack an accessible tool that:

Analyzes their environment in real time.

Understands their personality and emotional state.

Gives personalized advice by combining data about the user and the external environment.

Existing solutions (chatbots, navigation apps) work in isolation and do not adapt to a person's unique needs.

Proposed solution:

Omi as a “Digital Companion” with extended perception:

Multimodal analysis:

Camera + AI vision:

Scans the environment (e.g. recognizes dangerous objects in an emergency, reads signs in a foreign language, estimates crowd density); a vision sketch follows this list.

Analyzes the user's non-verbal cues (facial expressions, posture) to assess stress or fatigue.

Voice/text: Records emotional patterns in speech.
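As a rough illustration of the camera pipeline, here is a minimal sketch assuming the ultralytics YOLO package and an OpenCV camera stream; the hazard watch list and the confidence threshold are illustrative assumptions, not part of the proposal:

```python
# Minimal environment-scanning sketch: run a small YOLO model on a
# camera stream and flag objects from a hypothetical hazard list.
import cv2
from ultralytics import YOLO

HAZARD_CLASSES = {"knife", "scissors"}  # hypothetical watch list

model = YOLO("yolov8n.pt")  # small model, plausible for edge hardware
cap = cv2.VideoCapture(0)   # wearable camera stream

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)
    for box in results[0].boxes:
        label = model.names[int(box.cls)]
        if label in HAZARD_CLASSES and float(box.conf) > 0.6:
            print(f"Possible hazard: {label} ({float(box.conf):.2f})")
cap.release()
```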

Contextual support:

When traveling: suggests routes based on the user's preferences and the current situation (e.g. avoids noisy areas if Omi detects signs of anxiety); a route-scoring sketch follows this list.

In an emergency: generates step-by-step instructions (e.g. finds the nearest exits in case of a fire after seeing the evacuation plan on the wall).
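A minimal sketch of the route-scoring idea; the Route fields, the noise weight, and the crowd-anxiety flag are all hypothetical:

```python
# Hypothetical route selection: penalize noisy routes heavily when the
# profile flags crowd anxiety, mildly otherwise.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: int
    noise_level: float  # 0.0 (quiet) .. 1.0 (very noisy)

def pick_route(routes: list[Route], crowd_anxiety: bool) -> Route:
    noise_weight = 30 if crowd_anxiety else 5
    return min(routes, key=lambda r: r.minutes + noise_weight * r.noise_level)

routes = [Route("main boulevard", 12, 0.9), Route("side streets", 15, 0.2)]
print(pick_route(routes, crowd_anxiety=True).name)  # -> "side streets"
```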

Personalization via the "Personality Diary":

Omi creates a dynamic user profile (a data-structure sketch follows this list), taking into account:

Values (e.g. environmental friendliness).

Behavioral habits (risk-taking vs. cautious tendencies).

Social connections (how often the user communicates with friends, reminders of important dates).
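A sketch of what the "Personality Diary" could look like as a data structure; the field names and the moving-average update rule are assumptions for illustration only:

```python
# Hypothetical dynamic profile: values, a risk-tolerance score that
# drifts with observed decisions, and lightweight social bookkeeping.
from dataclasses import dataclass, field

@dataclass
class PersonalityDiary:
    values: set[str] = field(default_factory=set)
    risk_tolerance: float = 0.5  # 0 = cautious .. 1 = risk-taking
    contact_frequency: dict[str, int] = field(default_factory=dict)  # friend -> chats/week
    important_dates: dict[str, str] = field(default_factory=dict)    # friend -> ISO date

    def observe_decision(self, was_risky: bool) -> None:
        # Exponential moving average keeps the profile dynamic.
        self.risk_tolerance = 0.9 * self.risk_tolerance + 0.1 * (1.0 if was_risky else 0.0)

diary = PersonalityDiary(values={"environmental friendliness"})
diary.observe_decision(was_risky=False)
print(round(diary.risk_tolerance, 2))  # -> 0.45
```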

Ethical protection:

"Invisible observer" mode: the camera is activated only with the explicit consent of the user.

Local data processing: sensitive information (e.g. friends' faces) never leaves the device and is not saved in the cloud (a face-anonymization sketch follows).
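A sketch of the local-processing rule: faces are blurred on-device before any frame could leave it. This uses OpenCV's bundled Haar cascade; the pipeline itself is an assumption, not a confirmed Omi design:

```python
# Blur detected faces locally so that only anonymized frames could
# ever be uploaded. Haar cascades ship with opencv-python.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def anonymize(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0
        )
    return frame  # only this anonymized frame leaves the device
```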

Alternatives and their disadvantages:

Existing AI assistants (Siri, Alexa): Do not analyze the environment and emotions.

Psychotherapeutic chatbots (Replika): No connection to the physical world.

Navigation apps (Google Maps): Do not adapt to the user's personality.

Additional context:

Use cases:

Scenario 1: Omi notices that the user is lost in an unfamiliar city and suggests a route to the hotel through quiet streets (given their tendency to get anxious in crowds).

Scenario 2: In a conflict with a friend, Omi reminds: “Last time you regretted your harsh words. Do you want me to find neutral wording?”

Technologies:

TinyML for on-device data processing.

Computer vision (OpenCV, YOLO) + NLP (GPT-4 for dialogue); a minimal integration sketch follows.
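A minimal sketch of how vision output and the profile could feed GPT-4 for dialogue, assuming the openai Python package; the prompt wording and the advise() function shape are illustrative:

```python
# Combine scene observations and a profile note into one GPT-4 call.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def advise(scene: list[str], profile_note: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"You are Omi, an empathetic companion. "
                        f"Scene: {', '.join(scene)}. User profile: {profile_note}."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(advise(["exit sign", "crowded hall"], "anxious in crowds",
             "How do I get out quickly?"))
```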

Result:

Omi becomes not just an "assistant" but a prosocial agent that combines analysis of the external world with the user's internal state to reduce stress and support more deliberate decisions. The key "feature" is the symbiosis of technology and empathy.
