Hi,
I'm Tom.

I'm a Python web developer living in Shanghai. This is my personal website and blog.


Build a simple protocol over TCP

Disclaimer: I am not an expert on TCP or protocol design; this post is just about my learning experience of building a protocol over TCP :)

A rookie mistake

When I was playing with sockets, a rookie mistake I made was assuming that each send implies a corresponding receive, like in the following example:

server.py

import socket

# Listen on port 2333 and accept a single connection.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind((socket.gethostname(), 2333))
sock.listen(1)
connection, address = sock.accept()

# Keep reading until the client closes the connection.
while True:
    data = connection.recv(1024)
    if not data:
        break
    print "Received: %s" % data

client.py

import socket

socket_address = socket.gethostname(), 2333
connection = socket.create_connection(socket_address)
connection.send("Hello there!")
connection.send("Bye bye!")
connection.close()

Running the client, I got the following output on the server side:

Received: Hello there!Bye bye!

So two sends resulted in one receive, not the two receives I expected. Hah. This comes from a misunderstanding of how TCP works.

TCP is a stream-oriented protocol, not a packet/message-oriented protocol like UDP. I'd like to use this analogy: TCP is like making a phone call, where a connection must be established before both ends are able to talk, and when you talk, a data stream flows over the connection. UDP, on the other hand, is like sending text messages.

The boundary

However, this rookie mistake got me thinking: when we build an application on top of a TCP socket, for example a chat application, how do we know where each message ends, since the data arrives as a stream? Where is the boundary between two messages? There must be something handled at the application level.

1. Delimiter

Back to the phone call analogy: let's say foo is reading a poem to bar over the phone. How does bar know when foo finishes a line? How does bar know when foo finishes the whole poem? Does the wired connection do that for you? No. But what we know from common sense is that there's a pause when you finish a line, and maybe a longer pause when you finish the poem. Similarly, maybe we can put a marker at the end of each message, just like the \r\n in HTTP headers.

Here is an improved version of the previous code using \r\n as the delimiter:

server.py

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind((socket.gethostname(), 2333))
sock.listen(1)
connection, address = sock.accept()

while True:
    data = connection.recv(1024)
    if not data:
        break
    else:
        for line in data.split("\r\n"):
            if line:
                print "Received: %s" % line

client.py

import socket

socket_address = socket.gethostname(), 2333
connection = socket.create_connection(socket_address)
connection.send("Hello there!\r\n")
connection.send("Bye bye!\r\n")
connection.close()

Now we get the messages separately in the output:

Received: Hello there!
Received: Bye bye!

The downside of this approach is that when a message is longer than 1024 bytes, a single recv only returns part of it. We need a buffer that accumulates incoming data until a delimiter shows up.
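For example, a small buffered reader (this is my own sketch, not part of the original server) could accumulate data and split on the delimiter:

# A buffered reader: keep receiving until at least one "\r\n" shows up,
# then yield the complete messages and keep the remainder in the buffer.
def read_messages(conn):
    buf = ""
    while True:
        data = conn.recv(1024)
        if not data:
            break
        buf += data
        while "\r\n" in buf:
            message, buf = buf.split("\r\n", 1)
            yield message

# Usage on the server side:
# for message in read_messages(connection):
#     print "Received: %s" % message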

2. Fixed length or length prefix

What if all messages have a fixed length? Short messages can be padded with spaces, something like:

connection.send("Hello there!".ljust(140))

So the server just needs to keep reading a fixed number of bytes from the socket. This works. However, there is still a hard limit on the length of a message.
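A server-side sketch for the fixed-length case (140 is just the padding size used above; recv() can return fewer bytes than requested, so we accumulate until a full record has arrived):

RECORD_SIZE = 140

while True:
    # Accumulate exactly one fixed-size record.
    record = ""
    while len(record) < RECORD_SIZE:
        data = connection.recv(RECORD_SIZE - len(record))
        if not data:
            break
        record += data
    if not record:
        break
    # rstrip() removes the padding added by ljust() on the client side.
    print "Received: %s" % record.rstrip()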

What if we tell the server the length of each message beforehand? We can do that by prefixing each message with its length. Yes! Just like the "Content-Length" header in HTTP.

make_message = lambda x: str(len(x)).ljust(4) + x
connection.send(make_message("Hello there!"))
connection.send(make_message("Bye bye!"))

Here we prefix each message with a 4-byte string indicating its length. The server first reads those 4 bytes to get the length, then reads exactly that many bytes. The recvall function below keeps reading until the requested number of bytes has arrived; with a plain recv there's a chance we get only part of the transmitted data, although on a local machine the chance is low.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind((socket.gethostname(), 2333))
sock.listen(1)
connection, address = sock.accept()

def recvall(conn, remains):
    # Keep calling recv() until `remains` bytes have arrived, or the
    # connection is closed. A single recv() may return fewer bytes.
    buf = ""
    while remains:
        data = conn.recv(remains)
        if not data:
            break
        buf += data
        remains -= len(data)
    return buf

while True:
    # Read the 4-byte length prefix first, then the message body.
    data = recvall(connection, 4)
    if not data:
        break
    length = int(data)
    message = recvall(connection, length)

    print "Received: %s" % message

At this point, we have something like a protocol on top of TCP that achieves the original goal.
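A common variation, not used above, is to pack the length as a 4-byte big-endian integer with the struct module instead of a 4-character ASCII string. A sketch reusing the recvall() helper from the example:

import struct

def send_message(conn, payload):
    # "!I" packs the length as a 4-byte big-endian unsigned integer.
    conn.sendall(struct.pack("!I", len(payload)) + payload)

def recv_message(conn):
    # Read the 4-byte header first, then exactly `length` bytes of body.
    header = recvall(conn, 4)
    if not header:
        return None
    (length,) = struct.unpack("!I", header)
    return recvall(conn, length)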

Native protocol of Cassandra

Now that we have a protocol of our own, although simple and naive, I'd like to take a look at a serious protocol built on TCP. Since I've been working with Cassandra a lot lately, I might as well check out its protocol.

The CQL binary protocol is the native protocol of Cassandra, built on top of TCP:

The CQL binary protocol is a frame based protocol. Frames are defined as:

  0         8        16        24        32
  +---------+---------+---------+---------+
  | version |  flags  | stream  | opcode  |
  +---------+---------+---------+---------+
  |                length                 |
  +---------+---------+---------+---------+
  |                                       |
  .            ...  body ...              .
  .                                       .
  .                                       .
  +----------------------------------------

Frames can be regarded as what we called messages in the previous examples. Apart from the first 32 bits, the length and body parts are just what we used. So our approach looks practical.
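Going by the quoted frame layout alone (four 1-byte fields followed by a 4-byte length), reading a frame could be sketched like this; this is just my reading of the diagram, reusing the recvall() helper from earlier, not code from any Cassandra driver:

import struct

def read_frame(conn):
    # version, flags, stream and opcode are one byte each, followed by
    # a 4-byte big-endian body length, per the frame diagram above.
    header = recvall(conn, 8)
    version, flags, stream, opcode, length = struct.unpack("!BBBBI", header)
    body = recvall(conn, length)
    return version, flags, stream, opcode, body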

So that's it. There must be many more technical details involved in building a full-fledged protocol, but the fundamentals should work the same way.

 

Cassandra: A Journey of Upgrade

For the past couple of months, a huge burden on my shoulders has been upgrading our Cassandra cluster from 1.2.6 to 2.1. I've invested a lot of working hours figuring out the solution. Now that it's done, I feel it is worthwhile to write down the whole experience.

Why Upgrade?

The imperative reason is that we need transaction support in one of our services. Cassandra 2.0 introduced a feature called lightweight transactions, and although it is lightweight, it is enough to address our issue.

Besides that, there are also a couple of new features we can benefit from after the upgrade:

Infrastructure

Upgrade Path

Driver upgrade

We were using a fairly old Cassandra driver called Pycassa, which is no longer maintained. It is based on the Thrift protocol, which is deprecated/ditched in version 3, so all the new and good stuff in the native protocol has nothing to do with Pycassa. Naturally, we switched to the recommended/official driver maintained by DataStax.
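To give a rough idea of what the switch looked like (the keyspace, table and addresses below are made-up placeholders, not our real schema), the change was essentially from Pycassa's column-family API to CQL statements through the DataStax driver:

# Before: Pycassa, Thrift-based (roughly)
from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

pool = ConnectionPool("my_keyspace", ["10.0.0.1:9160"])
users = ColumnFamily(pool, "users")
row = users.get("some_user_id")

# After: DataStax python driver, native protocol (roughly)
from cassandra.cluster import Cluster

cluster = Cluster(["10.0.0.1"])
session = cluster.connect("my_keyspace")
rows = session.execute("SELECT * FROM users WHERE user_id = %s",
                       ("some_user_id",))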

Internally we don't have an abstraction layer over Cassandra, so the refactoring was a lot of pain. We had to replace all usages of Pycassa across all services and carefully update all the unit tests.

We also bumped into some issues when deploying with the new driver:

The driver upgrade was not as smooth as I thought. A lot of back and forth happened, and it took us almost two months to ship it.

No rolling upgrade?

A rolling upgrade should be the default option for a cluster upgrade, but unfortunately it is not supported between major versions of Cassandra, as documented here. We thought about workarounds, like building a new cluster and syncing data between the two clusters, but building a new cluster was not an option for us due to some "policy". So we decided we could tolerate some downtime, which also means upgrading each Cassandra instance in place.

Data backup and restore

It's important to have a backup of the data so that if something goes wrong, we can go back to the save point. When backing up, we require that all services accessing Cassandra be stopped, so the data stays untouched during the process.

Below is a typical structure of one of our Cassandra nodes:

/mnt/cassandra/
├── commitlog_directory
└── data_file_directories

"data_file_directories" is where the Cassandra data files live, and our goal is to back up this directory. We do a 'nodetool drain' on the node, which flushes all memtables to data files. After that we pack data_file_directories into a tarball and upload it to the cloud (to guard against disk failure on the node), so we end up with two copies of the data.

Procedure:

If something goes wrong and we want to abort the upgrade and go back to the old version, we simply retrieve the old data and unpack it into the Cassandra data file directory.

The backup and restore procedures are automated with Ansible scripts.
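The gist of what the backup scripts do per node can be sketched roughly like this (the local paths and the upload destination are placeholders, not our real setup):

import subprocess
import tarfile

# Flush all memtables to SSTables on disk before packing anything.
subprocess.check_call(["nodetool", "drain"])

# Pack the data directory into a tarball as the local copy.
with tarfile.open("/mnt/backup/data_files.tar.gz", "w:gz") as tar:
    tar.add("/mnt/cassandra/data_file_directories")

# Upload the tarball to cloud storage as the off-node copy
# (the upload command and bucket are placeholders).
subprocess.check_call(["aws", "s3", "cp",
                       "/mnt/backup/data_files.tar.gz",
                       "s3://backup-bucket/"])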

Upgrade

Upgrading directly from our current version 1.2.6 to 2.1 is not possible, since pre-2.0 SSTables are not supported by 2.1. With a direct upgrade to 2.1, Cassandra fails to start and raises the following error:

java.lang.RuntimeException: Incompatible SSTable found. Current version ka is unable to read file: /var/lib/cassandra/data/system/schema_keyspaces/system-schema_keyspaces-ic-1. Please run upgradesstables.
        at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:443) ~[apache-cassandra-2.1.1.jar:2.1.1]
        at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:420) ~[apache-cassandra-2.1.1.jar:2.1.1]
        at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:327) ~[apache-cassandra-2.1.1.jar:2.1.1]
        at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:280) ~[apache-cassandra-2.1.1.jar:2.1.1]
        at org.apache.cassandra.db.Keyspace.open(Keyspace.java:122) ~[apache-cassandra-2.1.1.jar:2.1.1]
        at org.apache.cassandra.db.Keyspace.open(Keyspace.java:99) ~[apache-cassandra-2.1.1.jar:2.1.1]
        at org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:558) ~[apache-cassandra-2.1.1.jar:2.1.1]
        at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:214) [apache-cassandra-2.1.1.jar:2.1.1]
        at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:443) [apache-cassandra-2.1.1.jar:2.1.1]
        at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:532) [apache-cassandra-2.1.1.jar:2.1.1]

So we upgraded to 2.0.0 first and ran the upgradesstables command to upgrade the SSTables. After that, we upgraded from 2.0.0 to 2.1.

Cassandra has an internal version for SSTables. During the upgrade, the SSTable version is bumped as follows:

ic (1.2.6) --> ja (2.0.0) --> ka (2.1.3)

Procedure:

The procedure looks simple and clear, but we hit a couple of issues when doing test upgrades:

Data consistency

How do we ensure data is not corrupted during the upgrade? I think this should be guaranteed by Cassandra. Still, when doing upgrade testing, we had a script to dump all Cassandra data before and after the upgrade to verify that the data was untouched. This step was skipped when we did the actual upgrade.
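The dump script was essentially along these lines (the keyspace and table names are placeholders, the real script walks over every table, and since result ordering is not guaranteed, the dumps should be sorted before comparing):

from cassandra.cluster import Cluster

def dump_table(session, keyspace, table, outfile):
    # Write every row out in a stable text form so the before/after
    # dumps can be compared with a plain diff.
    with open(outfile, "w") as f:
        for row in session.execute("SELECT * FROM %s.%s" % (keyspace, table)):
            f.write(repr(sorted(row._asdict().items())) + "\n")

cluster = Cluster(["10.0.0.1"])
session = cluster.connect()
dump_table(session, "my_keyspace", "users", "/tmp/users.before.dump")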

 

Ramble on Java & Session

Recently I've been working on some Java stuff, from JSP and Spring MVC to Hibernate. It's not such a smooth and comfortable switch from Python to Java, especially when building a website with frameworks. A hello-world page takes more effort in Java than in Python. Part of the reason may be that I'm quite a novice in the Java world, whose learning curve is steeper than Python's.

However, my previous experience with Python web development is not for nothing; to some extent it helps me understand the concepts in Java web development. I always try to find Python equivalents when coming across something new in Java. For example, Tomcat/Jetty, the Java servlet containers, are somewhat equivalent to WSGI containers like uWSGI, and a servlet is somewhat equivalent to a WSGI application. Hibernate is something like the Django ORM or SQLAlchemy. Spring AOP is like decorators. Spring controllers are somewhat like Flask views. There are some concepts I can't find analogies for, though, like dependency injection and IoC; Python seems to achieve those with language features.

During my exploration of Java web development, I started building a very simple todo-list app on Spring MVC, just to get my hands dirty with this famed framework. While working on a simple login function for the app, I started wondering how Spring handles sessions. Underneath, I use Spring Security for authentication and authorization; again, I find it somewhat equivalent to the auth/sessions apps in Django.

As I recall, Django supports a couple of session backends; by default it uses the database to store sessions. When a client first visits, a new session is created in the session table, where all data for that session is stored, and the session key is returned in a cookie. With this session key in the cookie, separate requests can share data and relate to each other, as they belong to the same session. When a client is authenticated, a flag is set in the session so it doesn't have to authenticate again.

Apparently this is not how Spring Security sessions work, as no table is created. Another session implementation I remember is Flask's, which uses the secure cookie from Werkzeug. That implementation stores the user's session data (no session key in this case) in the cookie itself: the session data is serialized and a checksum of the data is appended before it is sent back to the client, and the checksum is verified to make sure the data has not been tampered with. However, after inspecting the cookies there's only one called JSESSIONID, which should be the id of a session, and after skimming through some Spring Security code, this doesn't look like the approach adopted either.

So where the hell is the session stored in Spring? After some googling around, I learned that A) the session is a low-level API implemented in the servlet container, and B) Tomcat stores sessions in memory! I was a little surprised that sessions are not persisted. In terms of performance, an in-memory store is an absolute winner, but the problems are also obvious: what if the server crashes? What if there's a cluster of servers? How does it scale?

Then I learned that, to distribute sessions across a cluster of servers, Tomcat supports session replication. There's also a solution called "sticky sessions", a term I had never heard before, but it is just a load-balancing strategy that routes the same client to the same server, so the client sticks to that server and the session is kept. As for the scenario where a single server crashes, I'm not sure how Tomcat fails over; maybe it just fails, since a session was never meant to store persistent data.

Tracing back to the time when I was working with Django, we tended to use a session store other than the database, such as Memcached or Redis. Rereading the Django documentation on sessions, I found it also supports local memory, though that is not recommended. Tomcat likewise supports different persistent storage options.
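For reference, switching Django to a cache-backed session store is just a settings change, roughly:

# settings.py -- keep sessions in the cache framework instead of the DB.
SESSION_ENGINE = "django.contrib.sessions.backends.cache"

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
        "LOCATION": "127.0.0.1:11211",
    }
}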

So I've rambled on about many things regarding sessions; that is what really got me started on this post. What I was trying to convey is that, when switching to a different tech stack, surprise is not bad, as we may find that our understanding of things is not that accurate, or simply wrong. But just like the analogies I made, the philosophy behind things might be the same. Dig that.
