The Presence Dilemma

Perhaps I'm old school but I have a presence dilemma. I'm referring of course to the presence status of many instant messaging applications that show all my contacts whether I'm currently online, offline or in a state in between.

For me the dilemma is that there is a difference between being online and having the time or being in the mood to engage in a conversation. When I receive an instant message out of the blue and don't have time to respond, I sometimes don't feel comfortable rejecting the conversation, as that might be seen by the other party as rude, especially if he or she is also 'old school'. And then I have to remember to 'text back' once I have time, which isn't ideal either.

I could of course set my client to 'invisible' but then I would forget later on to switch it back to 'available' when I am or feel reachable again. And no, I don't want to go fully offline with my instant messaging client as sometimes I still want to be reachable for a select audience.

Yes, human interaction is complicated (or perhaps it's just me?) and instant messaging presence is far from reflecting my reachability status.

New York Metro Wireless Coverage – First Stations Connected At Last

Next to the London tube, New York's underground transportation system is one of the few places I have noticed over the years that still lack wireless coverage on a broad scale. It looks like things are changing, though, as I was positively surprised to have coverage in one of New York's metro stations recently. Have a look at the picture on the left to see what the antennas look like. I had to smile a bit because the cables and antennas are significantly bigger than those in the Paris metro, for example. Also, one antenna in the Paris metro suffices for four networks while New York needs three. But then, everything seems to be a bit bigger in the US.

Anyway, I did a bit of background research to see how far the project has come. It looks like there are currently 36 stations covered, but there is no talk of the tunnels in between the stations. For details see here. Better than nothing, but that's lacking a bit of ambition. I am sure some will point to the fact that the New York metro is old and the tunnels are narrow. That hasn't stopped in-tunnel coverage in Paris, though, where similar conditions can be found.

The article linked above has some interesting details concerning who pays for the deployment and how much it costs. It looks like a separate company called Transit Wireless was established to bundle all technical and financial matters:

Transit Wireless and the carriers are paying 100 percent of the cost of the project, estimated at up to $200 million [MS: for around 280 stations], including the cost of NYC Transit forces that provide flagging, protection and other support services. The MTA and Transit Wireless evenly split the revenues from occupancy fees paid by the wireless carriers and other sub-licensees of the network. Transit Wireless is paying MTA a minimum annual compensation that will grow to $3.3 million once the full build out of the network is complete.

When dividing the $200 million by 280 stations and potentially 4 network operators (leaving the Wi-Fi coverage that is also part of the project out of the equation for the moment), the cost of covering one station comes to about $180k per network operator. I can imagine that this is more than the cost of setting up a base station site above ground, but it still seems to be a reasonable sum to me, especially compared to the tens of thousands of people that pass through a station every day and generate traffic and revenue.
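
For those who like to check the numbers, here's the back-of-the-envelope calculation as a small Python snippet. The $200 million and the roughly 280 stations are taken from the quote above, and the four network operators sharing the cost follow the assumption made in this post:

    # Rough per-station cost based on the figures quoted above
    total_cost_usd = 200e6  # project estimate from the quoted article
    stations = 280          # approximate number of stations in the full build-out
    operators = 4           # assumption from this post: four cellular networks share the cost

    per_station = total_cost_usd / stations
    per_operator = per_station / operators

    print("Cost per station:              %.0f USD" % per_station)   # about 714,000 USD
    print("Cost per station and operator: %.0f USD" % per_operator)  # roughly 180,000 USD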

The post above then also describes some technical details of how coverage is distributed to the underground stations:

Wireless carriers […] co-locate their Base Stations with Transit Wireless’ Optical distribution equipment at a Transit Wireless Base Station Hotel, which is a resilient, fault-tolerant commercial facility with redundant air-conditioning and power.

[…] These Base Stations connect to Transit Wireless’ Radio Interface and Optical Distribution System in the Base Station Hotel. Radio signals are combined, converted to optical signals and distributed on Transit Wireless’ fiber optic cable through ducts under city streets to subway stations where the optical cables connect to multi-band Remote Fiber Nodes.

Remote Fiber Nodes are located on every platform, mezzanine and at various points within public access passageways. Coaxial cable is connected to each Remote Fiber Node and extends signals to strategically located antennas throughout each subway station. Utilizing this approach, low-level radio signals are evenly distributed providing seamless coverage from above ground to underground stations.

According to the article, the deployment includes free Wi-Fi as well which is good news for international travelers that are reluctant to use cellular data services due to high roaming charges.

Book Review: Voice over LTE (VoLTE)

Do you want to learn about VoLTE but you are not sure where to start? If so, here's a tip for you:

Learning how VoLTE works in a reasonable amount of time is not an easy task; there are just so many things to learn. Reading the 3GPP specifications as a first step to get up to speed is probably the last thing one should do, as there is just too much detail that confuses the uninitiated more than it helps. To get the very basics, my books probably serve the purpose. As they don't focus only on VoLTE, however, they might be too little for people who want to focus on VoLTE. This is where Miikka Poikselkä's book on VoLTE comes in, which he has written together with others such as Harri Holma and Antti Toskala, who are also very well known in the wireless industry.

If you are involved in Voice over LTE, you have probably heard the name Miikka Poikselkä before. He must have been involved in IMS since the beginning, as he published a book on IMS many years ago and he is also the maintainer of the GSMA specifications on VoLTE and other topics (e.g. GSMA IR.92). In other words, he's in the best position to give a picture of how VoLTE will look in the real world rather than just a theoretical description.

I've spent a couple of quality hours so far reading a number of different chapters of the book and found it very informative; I learned quite a few new things and got a deeper understanding of how a number of things influence each other along the way. The topic could have easily filled 500 pages, but that would have looked a little overwhelming to many. I am quite glad it isn't that long; in my opinion the 240 pages of the book strike a very good balance between too much and too little detail.

The book is perhaps not for beginners as many concepts are only quickly introduced without going deeper, which, however, suited me just fine. In other words, you'll do fine if you have some prior knowledge of wireless networks. With my background I found the introductory chapters on deployment strategies and the VoLTE system architecture, which also dig down a bit into the general LTE network architecture, to be just at the right level of detail for me to set things into context. This is then followed by the VoLTE functionality chapter that looks at the radio access and core network functionalities required for VoLTE, IMS basics on fifty pages, IMS service provisioning at an equal level of detail and finally a short intro to the MMTel (Multimedia Telephony) functionality. Afterwards, there's a detailed discussion of VoLTE end-to-end signaling that describes IMS registration, voice call establishment and voice call continuity to a circuit switched bearer on 60 pages, again at the right level of detail for me. CS Fallback, although not really part of VoLTE, is described as well. Other VoLTE topics discussed are emergency calls, messaging and radio performance.

In other words, this is a very good book to bring yourself up to speed on VoLTE if you have some prior experience in wireless, and a good reference to refresh your memory later on. Very much recommended!

Selfoss – How Good It Feels To Use My Own Webservices From Across The Atlantic

Due to Google Reader's imminent demise I've switched over to my self-hosted solution based on the Selfoss RSS aggregator running on a Raspberry Pi in my network at home. I've been using it for around two weeks now and it just works perfectly and has all the features I need. And quite frankly, every time I use it I get a warm and glowing feeling for a number of reasons: First, I very much like that this service runs from my home. Second, I very much like that all my data is stored there and not somewhere in the cloud, exposed to the prying eyes of a commercial company and half a dozen security services. Also, I like that I'm in control and that all communication is encrypted.

Although quite natural today, I get an extra kick out of the fact that I am sitting halfway across the globe and can still communicate with this small box at home. Sure, I've been using services hosted in my home network while traveling abroad, such as my VPN gateway and Owncloud, for quite some time now, but those always run in the background, so to speak, with little interaction. Reading news in the web browser on my smartphone, delivered by my own server at home, however, is a very direct interaction with something of my own far, far away. This is definitely my cup of tea.

French Regulator Says Interconnect Costs Per Subscriber Are Tens Of Cents Per Month

In many countries there's currently a debate fueled by Internet access providers (IAP) who argue that the ever increasing amount of data flowing through their networks from media streaming platforms will lead to a significant increase in prices for consumers. The way out, as they portray it, is to not only get paid by their subscribers but in addition also by the media streaming platforms. In practice this would mean that Google and Co. would not only have to pay for the Internet access of their data centers but in addition, they would also be required to pay a fee to the thousands and thousands of IAPs around the globe.

Unfortunately, I haven't seen a single one of these IAP claims backed up with concrete data on why monthly user charges are no longer sufficient to improve the local infrastructure in the same way as has been happening for years. Also, there has been no data on how much interconnect charges to other networks, at the border between the IAP's network and long distance networks, would increase on a per-user basis. Thus I was quite thankful when the French telecoms regulator ARCEP recently published some data on this.

According to this article in the French newspaper Le Monde (Google's translation to English here), ARCEP says that interconnect charges are typically in the range of tens of cents per user per month. In other words, compared to the monthly amount users pay for their Internet access, the interconnection charge per user is almost negligible. Also, interconnection charges keep dropping on an annual basis, so it's likely that this effect will compensate for the increasing traffic from streaming platforms.
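
Just to illustrate the order of magnitude with numbers of my own choosing (the 30 cents and the 30 euro monthly fee below are illustrative assumptions, not ARCEP figures):

    # Illustrative only: assumed figures, not ARCEP data
    interconnect_per_user_month = 0.30   # euros, 'tens of cents' per user and month
    subscription_per_month = 30.00       # euros, assumed typical monthly access fee

    share = interconnect_per_user_month / subscription_per_month
    print("Interconnect share of the monthly fee: %.1f%%" % (share * 100))  # 1.0%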

So the overwhelming part of what users pay per month for their Internet access goes toward paying for the costs of running the local access network up to the interconnect point. This means they pay for the facilities, routers, optical cables to the switching centers and from there for the optical cables to street hubs or the traditional copper cables directly from the switching centers to their homes.

Which of these things becomes more expensive as data rates increase? The cost of the buildings in which the equipment is housed stays the same or even decreases over time as the equipment gets smaller and more centralized, so it doesn't go there. Also, it's likely that fiber cables do not have to be replaced, due to technology improvements that ensure a continuous increase in the amount of data that can be piped through existing cables. That leaves the routing equipment in central exchanges and in street hubs, which has to be continuously upgraded. That's nothing new, however, and has been done in the past, too, without the need to increase prices. Quite the contrary.

One activity that is undeniably costly is laying new fiber in cities to increase data rates to the customer premises. Users who take advantage of this, however, usually pay a higher monthly fee compared to their previously slower connection. And from what I can tell, network operators have become quite cost conscious and only build new fiber access networks if they are reasonably certain they will get a return on their investment from the monthly subscriber fee. In other words, this can't be a reason behind the claim that increasing data rates will increase prices either.

But perhaps I'm missing something that can be backed-up with facts?

My Mobile Data Use Last Month

As a quick follow-up to the previous post on my fixed line data use, here are some numbers on my mobile data use last month. According to Android's data monitor I used 367 MB, after 439 MB the month before. The number includes:

  • 135 MB for mobile web browsing (due to not using Opera Mini anymore)
  • 55 MB for Google maps (very handy to check traffic on the way from and to work to decide on using alternative routes on a realtime basis)
  • 33 MB Youtube
  • 27 MB for email
  • 20 MB for streaming podcasts
  • 17 MB for app downloads (new Opera browser)
  • 10 MB for calendar and address book synchronization

Not included is the data I use for using my notebook on the way to and from work as I use a different SIM card for that purpose for which I have no records. But even if I included that I am pretty sure I would still be well below the 1 GB throttling threshold I have on my current mobile contract.

From a different point of view, however, my mobile data use pales compared to the 70 GB I transferred over my VDSL line at home last month.

Some Thoughts On My Monthly (Fixed Line) Data Use

In July last year I calculated how many bits per second I consume on average, 24/7. My calculation was based on a usage of around 30 GB per month and resulted in 92.6 kbit/s. Since then my use has increased quite a bit: last month I transferred almost 70 GB over my VDSL line at home (63 GB down, 6 GB up).
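
For reference, here's the simple calculation behind these averages as a small Python snippet (decimal gigabytes and a 30-day month assumed):

    # Average 24/7 data rate implied by a monthly transfer volume
    def avg_kbit_s(gb_per_month, days=30):
        bits = gb_per_month * 1e9 * 8      # decimal GB to bits
        seconds = days * 24 * 3600
        return bits / seconds / 1000.0

    print("%.1f kbit/s" % avg_kbit_s(30))  # about 92.6 kbit/s for 30 GB per month
    print("%.1f kbit/s" % avg_kbit_s(70))  # about 216 kbit/s for 70 GB per month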

In addition to what I used my VDSL line at home for a year ago, I have started using it to tunnel all my mobile data traffic through my personal VPN server at home. I assume that accounts for a significant part of the 6 GB of data that flowed in the uplink direction (plus the same amount in the downlink direction due to the round-trip nature of the application). A couple of additional gigabytes come from my increased web radio use with my new Squeezeplug. But the biggest increase comes from the much heavier use of video streaming services, as my XBMC home server has made it a lot simpler and more fun to access content.

Only a little of the content was HD, however, and average stream data rates were somewhere around 2 MBit/s. That's around 720 MB of data for every hour of video streaming. If 30 GB of my monthly data came from video streaming, that's the equivalent of around 41.5 hours of video. Sounds like a lot, but divided by 30 days that's around 1.4 hours of video per day.
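
The video estimate follows the same pattern; the 720 MB per hour and the 30 GB streaming share are the figures used above:

    # Hours of video that fit into a monthly streaming volume
    mb_per_hour = 720.0        # estimated data volume per hour of streaming (from above)
    streaming_gb_month = 30.0  # share of the monthly volume assumed to be video streaming

    hours_per_month = streaming_gb_month * 1000 / mb_per_hour
    print("%.1f hours per month, %.1f hours per day" % (hours_per_month, hours_per_month / 30))
    # about 41.7 hours per month, around 1.4 hours per day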

Now imagine how much data would have been transferred over my line with 2 teenagers at home and at full HD resolution…

GPRS Network Control Order

In this day and age of LTE it perhaps seems a bit outdated to write a technical post on a GPRS topic. But anyway, here we go, since I recently looked up the following details on the GPRS network control order in the 3GPP specs:

When GPRS was initially launched, mobile devices performed cell reselections, even during data transfer, on their own and without any information from the network on the parameters of the target cell. Consequently, there was an outage of several seconds during the cell change. Over time, networks adopted a feature referred to as "Network Assisted Cell Change" (NACC), which comes in a couple of flavors depending on the network control (NC) order.

From what I can tell, most GPRS and EDGE networks today use Network Control Order 0 (NC0). That means that the UE performs neighboring cell measurements during the data transfer and reselects to a new cell on its own, i.e. without informing the network. If the network indicates that it supports Cell Change Notification (CCN) (in SIB-13), the UE can ask for support for the cell change by sending a Packet Cell Change Notification message to the network. The network then supplies the system parameters of the new cell to the UE, which can then perform the cell reselection much more quickly. That's the NACC mode that's pretty much common in networks now.

But there is more. If the network indicates in SIB-13 that the Network Control Order equals 1 (NC1), the UE has to send measurement reports to the network. Cell reselections are still performed autonomously by the UE when required, again by using NACC if the CCN feature is indicated as active by the network.

And finally, there's Network Control Order 2 (NC2), in which the UE has to send measurement reports to the network and only reselects to another cell when told to do so by the network with a Packet Cell Change Order.
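
To condense the three modes into one place, here's a little illustrative Python sketch of the UE behavior as I understand it. This is of course just a summary for this post, not code from a real GPRS protocol stack:

    # Illustrative summary of UE behavior per GPRS Network Control order
    def ue_behavior(nc_order, ccn_supported):
        if nc_order == 0:
            # NC0: the UE measures and reselects on its own; with CCN it can ask for
            # the target cell's parameters first (NACC) to shorten the outage
            if ccn_supported:
                return "autonomous reselection, shortened by NACC"
            return "blind autonomous reselection, several seconds of outage"
        if nc_order == 1:
            # NC1: the UE additionally sends measurement reports but still reselects itself
            suffix = ", NACC available" if ccn_supported else ""
            return "measurement reports to the network, autonomous reselection" + suffix
        if nc_order == 2:
            # NC2: the UE reports measurements and only changes cells when the network
            # tells it to with a Packet Cell Change Order
            return "measurement reports, reselection only on Packet Cell Change Order"
        raise ValueError("unknown network control order")

    for nc in (0, 1, 2):
        print("NC%d: %s" % (nc, ue_behavior(nc, ccn_supported=True)))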

I haven't seen NC1 or NC2 in live networks yet but perhaps some of you have. If so, I'd be happy to hear about it.

For the details have a look in 3GPP TS 44.018 and 44.060.

Owning My Data – A Script To Export Images From Typepad

A number of service provider cloud services have vanished recently and have in some cases left me without the opportunity to retrieve my data beforehand. Take the Mobile Internet Access Wiki that I started many years ago as an example: it was just turned off without any notice. I think there is an old saying along the lines that one is allowed to make an error once, but not twice. Following that mantra, I started thinking about which other provider-hosted cloud services I use and how to back up my data – just in case.

The most important one is Typepad, which has hosted my blog since 2005. They do a good job and I pay an annual fee for their services. But that does not necessarily mean I will have access to my data should something go wrong. Typepad offers an option to export all blog posts to a text file and I've been making use of this feature from time to time already. There are also WordPress plugins available to import these entries into a self-managed WordPress installation. I haven't tried the latter part so far, but the exported text file is structured simply enough for me to believe that importing it into WordPress can be done. The major catch, however, is that the export does not include pictures. And I have quite a lot of them. So what can be done?

At first I searched the net for a solution; the suggestions range from asking Typepad for the images to Firefox plugins that download all images from a site. But none of them offered a straightforward solution to retrieve the full content of my blog, including images, to create regular backups. So I had a bit of fun lately creating a Python script that scans the Typepad export file for the URLs of images I have uploaded and ignores links to external images. Piped into a text file, that list can then be used with tools such as wget to automatically download all images. As the script could be useful for others out there as well, I've attached it to this post below. Feel free to use and expand it as you like, and please share it back with the community.

Over the years Typepad has changed the way uploaded images are embedded in blog posts and also the directory structure in which images are saved. I have found four different variants, ranging from simple HTML code to links to further HTML pages and JavaScript that generates a popup window with the image. In some cases the Python script copies the URL straight out of the text file, while in others the URL of the popup HTML page is used to construct the filename of the image, which can then be converted into a URL to download the file. Yes, it required a bit of fiddling around to get this working. This resulted in a number of if/elif decisions in the script with a number of string compare/copy/insert/delete operations. In the end the script gave me close to 900 URLs of images and their thumbnails that I have uploaded over the years.
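
For readers who just want to see the basic idea without downloading the attachment, here's a stripped-down sketch of the approach. The real script attached below handles all four embedding variants; this sketch only covers the simple case of image URLs that appear directly in the export file and point to the blog's own domain:

    #!/usr/bin/env python
    # Stripped-down sketch: scan a Typepad export file for image URLs that point
    # to the blog's own domain and print them, one per line (for use with wget -i).
    # The script attached to this post additionally handles the popup/JavaScript
    # embedding variants described above.
    import re
    import sys

    IMAGE_URL = re.compile(r'https?://\S+?\.(?:jpg|jpeg|png|gif)', re.IGNORECASE)

    def extract_image_urls(export_file, domain):
        urls = set()
        with open(export_file) as f:
            for line in f:
                for url in IMAGE_URL.findall(line):
                    if domain in url:        # ignore links to external images
                        urls.add(url)
        return sorted(urls)

    if __name__ == "__main__":
        export_file, domain = sys.argv[1], sys.argv[2]
        for url in extract_image_urls(export_file, domain):
            print(url)

Called in the same way as the real script ('./sketch.py blog.txt myblogname.typepad.com > image-urls.txt'), its output can be fed straight into wget as described in the procedure below.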

And here's the full procedure for backing up your Typepad blog and images on a Linux machine. It should work similarly on a Windows box, but I leave it to someone else to describe how to install Python and get 'wget' working there:

  • Login to Typepad, go to "Settings – Import/Export" and click on "Export"
  • This will start the download of a text file with all blog entries. Save with a filename of your choice, e.g. blog.txt
  • Use the Python script as follows to get the image URLs out of the export file: './get_links.py blog.txt domainname > image-urls.txt'. The domainname parameter is the name under which the blog is available (e.g. http://myblogname.typepad.com). This information is required so the script can distinguish between links to uploaded images and links to external resources which are excluded from the result.
  • Once done, check the URLs in 'image-urls.txt' and make spot checks with some of your blog posts to get a feeling for whether anything might be missing. The script gets all images from my blog but that doesn't necessarily mean it works equally well on other blogs as there might be options to upload and embed images that I have never used and result in different HTML code in the Typepad export file that are missed by the script.
  • Once you are happy with the content of 'image-urls.txt', use wget to retrieve the images: 'wget -i image-urls.txt'.
  • Once retrieved ensure that all files that were downloaded are actually image files and again perform some spot checks with blog entries.
  • Save the images together with the exported text file for future use.

Should the day ever come when I need this backup, some further actions will be necessary. Before importing the blog entries into another blog, the existing HTML and JavaScript code for embedded images in the Typepad export file needs to be changed. That's trickier than just replacing URLs because in some cases the filenames of the image thumbnails are different, and in other cases indirect links and JavaScript code have to be replaced with HTML code that directly embeds thumbnails and full images into posts. In other words, that's some more Python coding fun.

Download Get_links

The Number of Programming Languages I Have Used In The Past 12 Months

From time to time I need to get some things done that require some form of programming because they can't be done with an off-the-shelf program. When counting the number of scripting and programming languages I have used for various purposes over the last 12 months, I was surprised that it was at least 8. Quite an incredible number, and it was definitely only possible because Internet search engines make it possible to quickly find code samples and background information on programming language syntax and APIs on the net. Books might have helped with the syntax, but it would have taken much longer. Also, books would have been of little use for quickly finding solutions to the specific problems I had.

And here's the list of programming languages I have used in the past year and what kind of projects I used them for:

  • Python for my Typepad image exporter
  • Visual Basic for my WoaS to MoinMoin Wiki converter
  • Open Office Basic to improve a 7 bit to ASCII converter
  • Some bash programming for cron scripts, piping information to text files for later analysis, etc.
  • Zotero scripting to get footnotes into a special format
  • Java on Android for my network monitoring app and for giving an introduction to Android programming in my latest book
  • Assembly language for the deep dive in malicious code analysis
  • C, again for my deep dive in malicious code analysis

Obviously I haven't become an expert in any of those languages because I only used each language for a specific purpose and for a short time. But while their syntax and APIs are quite different, the basic procedural or object oriented approaches are pretty much the same. So I am glad that during my time at university I learnt the basic principles of programming that I can now apply quickly to new programming languages.