Don’t Panic: Swipe Safely for All Relationships

Dating app Tinder has added a panic button for dates that go wrong, as well as artificial intelligence to detect offensive messages. Both are important recent steps toward making online dating safer for its 5.9 million subscribers.

But for the estimated 59 million people on other dating sites, including Match, eHarmony and Bumble, these add-ons do not go far enough to protect those who use profile-based matching platforms to form important relationships: romantic partnerships, friendships, support networks and other connections.

Friendship-based apps are a growing trend. Many people are using the “view profile and match” model to enhance other aspects of their personal lives. Apps such as Bumble BFF, BarkHappy, Nextdoor and Adoptimist can help users find a new best friend, fellow dog owners, new grandparents and parents seeking to adopt children.

To be sure, meeting people online for non-romantic connections is not new. Meetup, founded in 2002, connects groups of people with similar interests. But forming long-term platonic relationships based on profile matching is newer, and matching sites need to account for the needs of people who seek non-romantic relationships.

I am a professor of computer science and an automation researcher, and I understand that a technology’s design can result in appropriate use or disuse, but also in misuse and abuse. Most design attention goes to use and adoption, but features that protect users from misuse and abuse are still needed.

I, too, use social media for connection. After I complained that I didn’t have local grandparents for my children’s “grandparents day” at school, a friend sent me an AARP article about the Facebook group “Surrogate Grandparents- USA.”

Children and older adults are two vulnerable groups, but there aren’t enough online safeguards to protect them.

The National Adult Protective Services Association states that one in nine older adults report abuse, neglect or exploitation, with financial abuse especially prevalent. Seniors are targets of online scams that involve requests for money under false pretenses or purchases of services that don’t exist.

Kik, the anonymous chatting platform popular with children and reportedly a haven for sexual predators, is shutting down.

While babysitting sites like care.com offer identity and background checks and references to help protect children, the surrogate grandparents Facebook group I wanted to join does not.

Facebook has a new tool that lets users see who has information about them, but this is not enough protection for people who, along with their children, use the platform to meet people they don’t know. Privacy protections are important, but misuse and abuse remain risks.

Identity verification is one approach to mitigating misuse. For instance, India’s IT ministry is considering mandatory ID verification on Facebook, WhatsApp and Twitter through a bill that proposes a publicly viewable mark of identification for users of those platforms.

Facebook’s Messenger Kids includes features that allow parents to monitor their children’s conversations and select whom they are allowed to chat with. This is a step in the right direction.

Yet these features rely heavily on parents to educate their children about online safety and to have the capacity to monitor their activity. For safety provisions to be effective, parents need to know which people are safe and how to evaluate the conversations they are reviewing.

Policies that criminalize digital harassment can protect people, but designs that prioritize safety are a more proactive approach.

As relationship applications reach more vulnerable populations, they need to prioritize features that verify identities and backgrounds, block harassers and remind users of the platforms’ safety limitations.

Safety needs to be a priority in tech design. If companies continue to develop safety features only after harm is done, citizens can advocate for policies that protect them and their families online before harm happens.

Enid Montague is an expert in human-centered automation in medicine, an associate professor of computing at DePaul University, an adjunct associate professor of general internal medicine at Northwestern University and a Public Voices Fellow through The OpEd Project.
