Wireless Embedded Internetworking Short Course

David E. Culler

University of California at Berkeley

Department of Electrical Engineering and Computer Science

 

 

Lab 8 TCP

 

 

 

In previous labs we built up complete and powerful wireless sensing applications using UDP to transmit datagrams over the embedded network, as well as over whatever portions of the conventional network we choose.  This is a powerful framework, and the 6LoWPAN adaptation layer makes it as efficient as any of the link-only proprietary or industry-standard protocols, which provide no systematic interoperation with the rest of the internet.  Essentially any application protocol or object representation can be layered over it. 

 

However, the internet architecture also defines a reliable stream transport: TCP.  It uses IP datagrams and end-to-end reliability mechanisms to provide a reliable byte stream between pairs of hosts, regardless of the mix of link media on the intervening route.  Most of the applications on the internet are built on TCP, including the web, FTP, email, SSH, and so on.  One powerful aspect of the internet architecture is that it provides the primitives and the tools for solving networking problems, but does not dictate the solution. 

 

In this lab, we will look at TCP/IP for our embedded devices.  We begin with reviewing the BSD sockets API for TCP/IP on conventional hosts.  Please take a look at the TCP examples in unix/ipv4 and unix/ipv6.  TCPclient.c and TCPserver.c are essentially the peers in the echo protocol. 

 

TCP Client

 

The client side is similar to what we have seen before.  It sets up a sockaddr structure, either in the IPv4 or IPv6 address family.

 

  server.sin6_family = AF_INET6;

  if (inet_pton(AF_INET6, argv[1], &server.sin6_addr) <= 0)

    error("Invalid IPv6 address\n");

  server.sin6_port = htons(port);

 

and creates a socket to get a handle on the stream.  Notice that TCP is indicated by the SOCK_STREAM parameter.

 

  if ((sockfd = socket(AF_INET6, SOCK_STREAM, 0)) < 0)

    error("ERROR opening socket\n");

 

then it connects to the server.  This is, of course, the big difference.  TCP is connection oriented, like a virtual circuit.  It is primarily a point-to-point relationship.  It is not typically used like your UDP listener, which just collects packets coming from a whole host of places. 

 

  /* connect to server on port */

  if (connect(sockfd, (struct sockaddr*)&server, sizeof(server)) < 0)

    error("ERROR connecting\n");

 

It can then write to the stream, here using send, and read responses, here using recvfrom.  A stream of bytes goes in one direction.  A stream comes back in the other.  The client and server layer an application protocol over the stream.  Such a protocol defines what is sent over the stream and what needs to be received before something is sent back.

 

  for (;;)

    {

      /* get message to send */

      printf("enter message: ");

      memset(buf, 0, sizeof(buf));

      fgets(buf, sizeof(buf), stdin);

     

      /* write message to socket */

      if (send(sockfd, buf, strlen(buf), 0) < 0)

            error("ERROR writing to socket");

     

      /* read response from socket */

      memset(buf, 0, sizeof(buf));

      if (recvfrom(sockfd, buf, sizeof(buf) - 1, 0,

             (struct sockaddr*)&from, &fromlen) < 0)

           error("ERROR reading from socket");

     

      printf("        reply: %s", buf);

    }

 

Beneath this application level exchange is the TCP protocol, which breaks the stream into segments, provides windows, timeouts, retransmission, duplicate detection, and all those things that we study and teach in conventional networking.

 

TCP Server

 

The server side is quite a bit more complex and serves to illustrate the relationship between networking paradigm and operating system concurrency model. 

 

We create a socket to have a handle on the stream

 

  /* create TCP/IPv6 socket */

  if ((sockfd = socket(AF_INET6, SOCK_STREAM, 0)) < 0)

    error("ERROR opening socket\n");

 

but when we set up the addressing structure we don’t know who will be communicating with us.  We bind to a port so that the client can find us.  We are a service on that port.

 

  /* setup destination sockaddr */

  server.sin6_family = AF_INET6;

  memcpy(&server.sin6_addr, &addrany, sizeof(addrany));

  server.sin6_port = htons(port);

 

  /* bind to port */

  if (bind(sockfd, (struct sockaddr*)&server, sizeof(server)) < 0)

    error("ERROR binding to socket\n");

 

Next we call listen.  The second parameter to listen indicates how much connection queuing the kernel should do on this socket on our behalf.  This is one little indicator of the deep relationship between buffer management and communication.  Buffers provide elasticity and hide round-trip delays.  They improve our ability to utilize available bandwidth.  They are a place to put those concurrent activities, e.g., messages, that we can't deal with immediately.  But they consume storage, and on embedded nodes storage is what is in very short supply.  Also, the amount of storage required depends on the rate at which things come in versus the rate at which we can service them.  Here, we may have many clients pounding on us, so we have no way to control the load.  We can queue a few; the rest have to be rejected.

 

  /* start listening for connections */

  if (listen(sockfd, 1) < 0)

    error("ERROR listening to socket\n");

 

The really important step is accept.  When this call is made, our thread goes to sleep until something comes in from a client.  (Actually, since they are performed sequentially, it is hard to tell whether the thread blocks on the listen or the accept.  In any case, it blocks until there is a connection to accept.)  If we had other events to service, we would need to spawn a thread to handle them prior to this call.  In addition, when a connection is accepted, it is given a new socket, distinct from the one we are listening on.  The main reason for this is to manage concurrency.  If the server is to service multiple simultaneous requests, it can fork a thread for each one and hand the socket for the particular client to that service thread.  The main thread of the server can keep accepting connections and farming them out to service threads, as in the sketch below.  Here we have a single thread and serve one client at a time.  We recv from the client-specific socket and send the response back.  When the client closes the connection, we break out of the loop and close the client-specific socket. 
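
For illustration, here is a minimal sketch of that thread-per-connection pattern, assuming POSIX threads; it is not part of TCPserver.c, and the service routine is hypothetical:

 

  #include <pthread.h>

  #include <stdlib.h>

 

  /* hypothetical per-client routine: echo loop on its own socket */

  void *service(void *arg) {

    int connfd = *(int*)arg;

    free(arg);

    /* ... recv/send echo loop on connfd, as in the code below ... */

    close(connfd);

    return NULL;

  }

 

  /* main thread: keep accepting and farm each connection out */

  for (;;) {

    pthread_t tid;

    int *fd = malloc(sizeof(int));

    fromlen = sizeof(from);

    if ((*fd = accept(sockfd, (struct sockaddr*)&from, &fromlen)) < 0) {

      free(fd);

      continue;

    }

    pthread_create(&tid, NULL, service, fd);

    pthread_detach(tid);  /* thread cleans up after itself; no join */

  }

 

Our echo server, by contrast, takes the single-threaded path: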

 

  /* accept first incoming connection */

  if ((connfd = accept(sockfd, (struct sockaddr*)&from, &fromlen)) < 0)

    error("ERROR accepting connection\n");

 

  /* create readable form of IPv6 addr */

  inet_ntop(AF_INET6, &from.sin6_addr, ns, INET6_ADDRSTRLEN); 

  printf("Accepted connection from %s\n", ns);

 

  for (;;)

    {

      /* read incoming message from socket */

      memset(buf, 0, sizeof(buf));

      if ((n = recv(connfd, buf, sizeof(buf) - 1, 0)) < 0)

        error("ERROR reading from socket");

     

      /* check if connection was closed */

      if (n == 0)

        break;

     

      /* print on server stdout */

      printf("%s", buf);

     

      /* echo message back to client */

      if (send(connfd, buf, strlen(buf), 0) < 0)

        error("ERROR writing to socket");

    }

 

  close(connfd);

  close(sockfd);

 

Incidentally, this echo service is like persistent HTTP.  HTTP 1.0 opens a connection and makes a single request.  Persistent HTTP can make multiple requests to the same server before closing the connection.

 

Notice one more important aspect of the interface.  The recv call passes a pointer to a buffer where the received data is to be placed, along with a length.  The application is providing the buffering for the incoming stream.  Typically, your Linux machine has tons of memory, so the application makes this buffer so big that it doesn't have to worry about it.  However, as we operate in a memory-constrained environment, a single incoming request may be broken across numerous recv calls.
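
To make that concrete, here is a minimal sketch of reassembling one request across multiple recv calls; the newline delimiter is a convention assumed for illustration, not part of the echo protocol:

 

  /* accumulate one newline-terminated request across recv calls */

  size_t used = 0;

  memset(buf, 0, sizeof(buf));

  while (used < sizeof(buf) - 1) {

    int n = recv(connfd, buf + used, sizeof(buf) - 1 - used, 0);

    if (n <= 0)        /* error or connection closed */

      break;

    used += n;

    if (memchr(buf, '\n', used) != NULL)  /* complete request arrived */

      break;

  }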

 

Embedded TCP over TinyOS

 

The BSD sockets API and the WINSOCK API have been in place for years.  They are deeply intertwined with the Unix and Windows concurrency models.  The wireless sensor network community, on the other hand, has used the Active Message abstraction, which appeared in the original version of TinyOS and was adapted from high-performance parallel computing.  The research community has never really asked what the communication abstraction should be, how it should be integrated with the low-power, event-driven execution model, and, perhaps more importantly, how it should deal with the very limited storage capacity on these embedded nodes.

 

In previous labs, we saw that the UDP programming interface fit naturally in the event driven model.  Each datagram signaled an event.  The kernel provided storage for the incoming datagram.  The application used it, but did not retain it.  The application provided the buffer for send.  The same addressing structures were used.  The main difference was that we did not pass around a handle for the socket.  Instead, the relationship between the application and the socket was established at build time.  At run time, we picked the port to bind to.

 

Let's return to tos/echo and look at the TCP server, EchoTcpC.nc. 

 

The configuration wires the echo server to one of the TCP sockets provided by the kernel.

 

configuration EchoTcpC {}

 

implementation {

  components KernelC;

  components EchoTcpP as EchoP;

 

  EchoP.Boot -> KernelC.Boot;

  EchoP.Tcp  -> KernelC.Tcp0;

}

 

The server is in the implementation module, EchoTcpP.nc, reproduced here.  It uses the Tcp interface, which is found in kernel/interface/tcp.nc.  It is the collection of commands and events related to managing a TCP connection.  The commands are the usual operations that the application performs on the socket: bind, send, close.  The events are the state transitions in the TCP state machine, i.e., the handshake between the client and the server, along with the arrival of data (recv) and the completion of a send (acked).  The client side involves an additional command, connect.  We are focused here on the server side.  Again, in the event-driven model it is even simpler than on the Linux platform.
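
Reconstructed from the calls used in the module below, the interface looks roughly like this; the actual declarations live in kernel/interface/tcp.nc, and the connect signature in particular is a guess:

 

interface Tcp {

  /* application-invoked operations */

  command error_t bind( uint16_t port, void *buf, uint16_t len );

  command error_t connect( sockaddr_in6_t *to );  /* client side */

  command error_t send( void *buf, uint16_t len );

  command error_t close();

 

  /* kernel-signaled state transitions and data arrival */

  event bool     accept( sockaddr_in6_t *to );

  event void     connected();

  event uint16_t recv( void *buf, uint16_t len );

  event void     acked();

  event void     closed();

}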

 

#include "Echo.h"

 

module EchoTcpP { 

  uses interface Boot;

  uses interface Tcp;

}

 

implementation {

 

  enum {BUF_SIZE = 128};

  uint8_t m_buf[ BUF_SIZE ]; /* Transmit buffer */

 

  event void Boot.booted() {

    call Tcp.bind( ECHO_PORT, m_buf, sizeof(m_buf) );

  }

 

  event bool Tcp.accept( sockaddr_in6_t *to ) {

    return TRUE;

  }

 

  event void Tcp.connected() { }

 

  event uint16_t Tcp.recv( void *buf, uint16_t len ) {

    /* echo back message */

    if ( call Tcp.send( buf, len ) != SUCCESS ) return 0;

    return len;

  }

 

  event void Tcp.acked() {}

 

 

  event void Tcp.closed() {

    /* setup socket for new connection */

    call Tcp.bind( ECHO_PORT, m_buf, sizeof(m_buf) );

  } 

}

 

This simple example illustrates a number of important subtleties.  First, storage allocation.  This follows the TinyOS storage model of static allocation: make sure up front that your application has all the storage it will ever need.  Dynamic allocation is nice because you allocate storage as you need it.  It can be advantageous because different activities can share the same storage and operate in a smaller footprint.  On the other hand, if those activities happen to run at the same time, the system may hang because there isn't enough storage available.  This is especially problematic because such failures are rare and occur only at critical times, which makes them very hard to test for.  When the crisis comes and the rare events coincide, the network crashes.  Dynamic allocation also increases the likelihood of dangling pointers and unanticipated interactions between logically distinct parts of the system or application.  There is nothing that prevents you from writing a dynamic buffer manager, but you don't have to, and the system won't impose one on you.  Whatever storage you want to use for transmitting byte streams over the socket is what you provide to Tcp.bind.  Here we are making that binding at boot time. 

 

Of course, the TCP server may send zillions of bytes over this socket.  All of them will be sent through this buffer.  The application allocates the buffer, but the kernel manages its use and reuse.  If you send so fast that the buffer becomes full and there is no room for more bytes to be placed into it, the send call will fail with nothing added to the buffer.  More on this later.

 

There is no need to call listen.  Since there isn't a thread hanging on the socket, the application protocol is much simpler.  Once there is a buffer bound, any connection that comes in will signal an accept event.  Accepting the connection causes the handshake to complete.  When the connection is established, the connected event is signaled.  As segments of the stream come in, Tcp.recv events will be signaled with a buffer containing a portion of the received stream.  The recv event fires chunk by chunk as the stream comes in. 

 

Often in embedded applications we will want to process the request straight off the air, so we store a minimal amount of information about it.  We'll see more of this later.  Here, we just hand the received buffer to the send command.  The kernel will transfer the message to the transmit buffer (the one that was bound to the socket) and spool it out over the network.  When the send has been completed, the acked event is signaled.  This is also what the application uses to know that it can retry a previous send that failed for lack of buffer space, as sketched below.  Since we are dealing in the embedded world, where we need to be careful about optimistic protocols that work “almost all the time” (especially if you throw enough resources at them), we have built congestion control into the API from the beginning.
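
As a sketch of that retry pattern, under the event semantics just described, the module could keep its own copy of any data whose send failed and retry it when acked fires; m_retry and m_retryLen are hypothetical additions, not part of EchoTcpP.nc:

 

  uint8_t  m_retry[ BUF_SIZE ];  /* copy of data awaiting retry */

  uint16_t m_retryLen = 0;

 

  event uint16_t Tcp.recv( void *buf, uint16_t len ) {

    if ( call Tcp.send( buf, len ) != SUCCESS ) {

      /* transmit buffer full: save a copy, retry when acked fires */

      if ( len > sizeof(m_retry) ) len = sizeof(m_retry);

      memcpy( m_retry, buf, len );

      m_retryLen = len;

    }

    return len;

  }

 

  event void Tcp.acked() {

    if ( m_retryLen > 0 &&

         call Tcp.send( m_retry, m_retryLen ) == SUCCESS )

      m_retryLen = 0;

  }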

 

Finally, when the connection is closed, the corresponding event is signaled.

 

This interface design, and that for UDP/IP, is the soon-to-be-published work of Jonathan Hui.  It hasn't yet had the opportunity to stand the test of time.  But, in my view, its beauty and elegance are revealed in its simplicity: even without the underlying subtleties explained, it seems obvious how it works.

 

Exercise

 

  • Put tos/echo back on your mote and test it out with TCPclient, rather than nc and telnet like you did before. 
  • Modify it in ways that are interesting to you; here are some suggestions:
    • Add LEDs so that you can see the sequence of states in the TCP protocol.
    • Add timers and other events so that you can see that you can still handle all those other things while communicating over TCP.
    • Add your sensing and inputs and such, so that rather than echoing the request you have an embedded wireless TCP/IP data server.
    • Add a watchdog timer to protect your embedded node from slow or buggy clients.  For example, you might want to make sure that it progresses through the accept, connect, recv sequence in a limited amount of time; a sketch of this pattern follows the list.  To test this, you can modify your TCPclient so it connects and doesn't send any data.  The node should time out and close the connection.  You might also want to put a limit on the whole session: the client can connect and send/recv for a while, but the time is limited.  This is to prevent a client from arbitrarily draining energy resources on the embedded device.
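
A minimal sketch of that watchdog pattern, assuming the Timer<TMilli> interface from the earlier timer labs (this kernel's timer interface may differ) and a hypothetical WATCHDOG_PERIOD constant:

 

  uses interface Timer<TMilli> as Watchdog;  /* in the module's uses block */

 

  event bool Tcp.accept( sockaddr_in6_t *to ) {

    call Watchdog.startOneShot( WATCHDOG_PERIOD );  /* arm on accept */

    return TRUE;

  }

 

  event void Tcp.connected() {

    call Watchdog.startOneShot( WATCHDOG_PERIOD );  /* progress: re-arm */

  }

 

  event uint16_t Tcp.recv( void *buf, uint16_t len ) {

    call Watchdog.startOneShot( WATCHDOG_PERIOD );  /* progress: re-arm */

    if ( call Tcp.send( buf, len ) != SUCCESS ) return 0;

    return len;

  }

 

  event void Watchdog.fired() {

    call Tcp.close();  /* client stalled: reclaim the connection */

  }

 

  event void Tcp.closed() {

    call Watchdog.stop();

    call Tcp.bind( ECHO_PORT, m_buf, sizeof(m_buf) );

  }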