Trust & Safety at Bitly

Trust and Safety at Bitly means ensuring that our users can safely and confidently interact with the links and QR Codes created on our platform. Our trust and safety team has been quietly working toward that end for some time, and we are now embarking on an effort to be more transparent about what we do and how we do it every day. In this post, we'll introduce you to who we are, the scope and challenges of our work, and what you can expect from us moving forward.

Who We Are

We are a cross-functional group working across engineering, legal, product, design, customer success, and other parts of the organization. To a person, we have boundless energy in pursuit of our mission to balance internet safety with the right to free expression, and we are constantly seeking ways to be better at our jobs than we were yesterday.

What We’re Facing

If you've found your way to this blog post, chances are that you've encountered a spam link, and maybe one that was shortened using Bitly. (We get them ourselves from time to time and, trust me, it annoys us as much as it does you.) The reality is that we deal with a wide range of abuse on our platform beyond just spam. Given the size of our platform and the use of Bitly across the web, we encounter not only the usual suspects, such as phishing and malware, but also, at times, truly detestable content, such as hate speech and violent extremism. We are constantly evolving how we detect and eliminate harmful content from links and scans, and we've been leaning in heavily with partners to help us along the way.

How We Identify Abuse

Not only do we face a wide variety of abuse, but we also do so at a scale that represents an enormous technical challenge and a significant responsibility for protecting our users. Millions of links and QR Codes are created by Bitly users every single day, and that translates into billions of clicks and scans per month. While abusive activity is a very small percentage of that overall volume, the impact of even one abusive link or QR Code can be large. Given this challenge, we take a multifaceted approach to identifying and addressing abuse on our platform, through a combination of third-party vendors specializing in abuse detection, trusted tech partners, NGOs, and our patent-pending internal technology.

We also take our responsibility to balance free expression with safety seriously. Just like so many other companies battling online harm, we endeavor to be both accurate and consistent. As those who intend harm change their tactics and technology, we will investigate all reports of abusive content and adjust our approach or adapt our technology accordingly. 

What’s The Latest and What’s Ahead For Bitly

Moving forward, we will publish more blog posts about what we've done for trust and safety and what plans we have on the horizon. In that spirit, we are excited to announce that we just launched a Trust Center, which serves as a one-stop resource for all of Bitly's trust and safety-related policies, resources, tools, and tips. We also recently published an Acceptable Use Policy to provide a clear set of rules about the activities and behavior we do not allow on our platform. As both our platform and the internet evolve, so too will our policies, and we will keep you informed when we make changes to the rules governing the use of our platform.

Looking forward, we have a lot of exciting projects on the roadmap! These include streamlining the process for users to report abuse to us and to submit an appeal in the event we've made a mistake.

Stay tuned!