L1ght Saves Kids From Online Toxicity, Using Data Science And AI


With increased connectivity comes increased concern – especially for parents whose children are active online.

Parents obviously want to shield their children from the horrific experiences we all hear, see, and read about. However, protecting them from toxic online behavior – bullying, hate speech, and sexual predators – takes more than telling them not to share personal information.

It’s a frightening new world online, especially for kids, and the stats are eye-opening.

The need for a better, all-encompassing solution becomes magnified when you consider that children are often more tech-savvy than their parents. Parents may be unaware of their children’s online activities, much less the behavioral norms of gameplay or the slang that goes with it – and worse yet, many of those activities are difficult to trace.

Zohar Levkovitz, a serial entrepreneur who sold Amobee for over $350 million and went on to become a star of Israel’s Shark Tank, joined forces with cybersecurity leader Ron Porat to save kids’ lives using data science.

Zohar Levkovitz, CEO of L1ght:

“I started L1ght because, as a father, I suddenly became aware of the dangers my children were facing online and how these nefarious characters continuously changed their tactics to stay under the radar. With the advent of deep learning, we have the ability to identify these conversations and predict that something toxic is about to happen, whether in text, audio, video, or images.”

Using AI to Keep Kids Safe

Solutions to date have mostly been consumer-oriented parental control applications. L1ght takes a different approach: it is an end-to-end solution that works directly with popular social networks and mainstream games as part of their infrastructure. That way, if Minecraft were to plug L1ght into its platform, it would be able to defend millions of kids at once.

After two years of research with a team of world-class PhDs, data scientists, and cyber experts, L1ght has developed a platform that uses deep learning to analyze text, images, video, voice, and sound in real time, as it aims to define the “Anti-Toxicity” category.

While other solutions simply trigger on harmful language drawn from a dictionary, L1ght’s technology assesses toxic behavior over time and in its exact context.
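To see why that distinction matters, here is a minimal sketch – purely illustrative, not L1ght’s actual model – of scoring a conversation over time rather than triggering on a single flagged word. The per-message scores are hypothetical toxicity values, assumed to come from some upstream classifier.

```python
# Illustrative sketch (NOT L1ght's actual method): flag sustained toxicity
# across a conversation window instead of reacting to one flagged word.
from collections import deque

def conversation_risk(scores, window=5, threshold=0.6):
    """Flag a conversation when the rolling average of the last `window`
    per-message toxicity scores (hypothetical values in 0.0-1.0) stays at
    or above `threshold` -- a sustained pattern, not a one-off outburst."""
    recent = deque(maxlen=window)
    for score in scores:
        recent.append(score)
        if len(recent) == window and sum(recent) / window >= threshold:
            return True
    return False

# A single heated message in an otherwise benign exchange is not flagged...
print(conversation_risk([0.1, 0.9, 0.1, 0.1, 0.1]))  # False
# ...but a run of consistently hostile messages is.
print(conversation_risk([0.7, 0.8, 0.6, 0.9, 0.7]))  # True
```

A single-word trigger would treat both conversations identically; scoring the exchange as a whole separates one-off banter from repeated victimization.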

Delivering Results and Catching the Bad Guys

Within the first quarter of its launch, L1ght (formerly AntiToxin Technologies) made headlines for removing over 130K pedophiles from public groups on WhatsApp. During that same period, the company convinced Google and Facebook to purge apps that were monetizing links to questionable WhatsApp groups, and later Bing removed underage porn from its search results at L1ght’s recommendation.

L1ght’s technology can spot the difference between a consensual exchange of intimate photographs between adults on a messaging app, versus an adult “grooming” a minor.

The technology can also pick up on nuances in textual exchanges – for instance, distinguishing teenagers throwing ‘fighting words’ at one another while competing in a game from one teenager repeatedly victimizing another in a harassing manner.

This type of technological development is light years ahead of today’s dictionary blacklists of forbidden words – a standard tracking method still used by many online forums as a low-effort way to exert some degree of control over discussions. However, it doesn’t take much for users to collectively find a way around it.

For instance, if the word “purple” gets blacklisted for some reason, people would still type other variations of it, such as “prpl” or take a jab at the forums altogether by elaborately writing out things such as, “The forbidden color that shall not be named.”
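The evasion described above is easy to demonstrate. The sketch below (an illustration of the general technique, not any forum’s real filter) shows a naive blacklist missing simple obfuscations, and a slightly hardened variant that collapses them – while the “forbidden color that shall not be named” workaround still escapes both, which is exactly why context-aware models are needed.

```python
# Illustrative sketch: why naive dictionary blacklists are easy to evade.
BLACKLIST = {"purple"}  # stand-in for some forbidden word

def naive_filter(message):
    """Flags a message only if a blacklisted word appears verbatim."""
    return any(word in BLACKLIST for word in message.lower().split())

print(naive_filter("I love purple"))        # True  -- caught
print(naive_filter("I love prpl"))          # False -- vowel-dropping slips through
print(naive_filter("I love p.u.r.p.l.e"))   # False -- punctuation slips through

def normalized_filter(message):
    """Hardened variant: strip punctuation and vowels before comparing,
    so simple obfuscations collapse to the same consonant skeleton."""
    def skeleton(word):
        return "".join(c for c in word.lower()
                       if c.isalpha() and c not in "aeiou")
    skeletons = {skeleton(w) for w in BLACKLIST}
    return any(skeleton(word) in skeletons for word in message.split())

print(normalized_filter("I love prpl"))         # True -- now caught
print(normalized_filter("I love p.u.r.p.l.e"))  # True -- now caught
# But a paraphrase like "the forbidden color that shall not be named"
# defeats any word-level filter; only context-level analysis can catch it.
```

Each hardening step just shifts the arms race: users adapt faster than word lists can grow, whereas behavior-over-time analysis targets the intent rather than the spelling.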

With all this in mind, L1ght continues its pursuit to act as an online guardian and be seen as the ultimate stamp of approval for child safety in the tech arena.