The Role of WiMAX in Enabling Wi-Fi Data Offload Networks

March 10, 2010


Between the continued explosion of smartphone growth and a quick review of the news coming out of the Mobile World Congress (MWC) event in Barcelona, it is clear that offloading data from 3G networks remains a huge topic. At MWC, companies including Stoke, Tellabs, Juniper and more were all abuzz about the huge market potential for “data offload”, referring to the opportunity to help operators move data traffic off their congested 3G networks and onto Wi-Fi networks.

With more than 40 million iPhones sold to date and an estimated 5.4 million Android devices shipping per quarter, tons of new data-intensive applications are being put in the hands of millions of cellular users each day. As a result, data traffic on today’s 3G networks has spiked drastically, to the point where traffic growth is outpacing new revenue for the carriers. While operators’ revenues have increased by 50 to 100 percent due to data services, those same services have increased 3G traffic by anywhere from 500 to 1,000 percent. This creates a very serious problem: 3G networks that are completely overloaded (especially in metro areas), degrading the experience of millions of smartphone users, and carriers that are not recouping anywhere near as much in revenue as they are being forced to provide in network capacity.
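To put rough numbers on that squeeze, here is a quick back-of-the-envelope sketch. The baseline revenue and traffic figures are made up purely for illustration; only the growth percentages come from the paragraph above.

```python
# Back-of-the-envelope illustration of the revenue-vs-traffic squeeze.
# Baseline figures are hypothetical; only the growth percentages
# (50-100% revenue, 500-1000% traffic) come from the article.

baseline_revenue = 100.0   # arbitrary revenue units before the smartphone surge
baseline_traffic = 100.0   # arbitrary traffic units before the smartphone surge

for rev_growth, traffic_growth in [(0.5, 5.0), (1.0, 10.0)]:
    revenue = baseline_revenue * (1 + rev_growth)
    traffic = baseline_traffic * (1 + traffic_growth)
    # Revenue earned per unit of traffic carried, relative to a baseline of 1.0
    revenue_per_unit = revenue / traffic
    print(f"revenue +{rev_growth:.0%}, traffic +{traffic_growth:.0%} "
          f"-> revenue per unit of traffic falls to {revenue_per_unit:.2f}x")
```

Even in the best case, the carrier is earning only a fraction as much per unit of traffic as it did before the surge, which is exactly why capacity spending is outrunning data revenue.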

Due to the degradation of 3G data services in overloaded areas, many people are already turning to the built-in Wi-Fi capabilities of these smartphones as the only way to get acceptable browsing, video streaming, and other data-intensive application experiences. AT&T has said that the number of connections made to its 20,000-plus domestic hotspots in the fourth quarter of 2009 increased to 35.3 million, up 10 million connections from the third quarter of 2009. Fourth-quarter connections also surpassed the number of connections made in the entire first half of 2009. So what does this all mean?

The Urgent Need for 3G Data Offload

Simply put, the increase in smartphone usage and the subsequent explosion of data traffic on 3G networks has caused an immediate need for carriers to offload the data traffic from the 3G network, so that both voice and data services perform optimally.  And due to the built-in Wi-Fi capabilities of the millions of smartphones on the market, users are often turning to Wi-Fi hotspot services to fill in the gaps when the 3G network can’t cut it.

And as we’ve seen from AT&T’s purchase of hotspot provider Wayport and Verizon’s partnership with hotspot provider Boingo Wireless, it seems that carriers are embracing the need to utilize Wi-Fi as a way to lessen the strain on their 3G networks. And while the carriers will still proceed in their plans to roll out 4G (to stay ahead of the competition), offloading data traffic from LTE networks will be an issue as well, so the 3G data offload networks put in place now will continue to serve as an ongoing solution for ensuring optimal voice and data communications moving forward.

Though AT&T and Verizon’s partnerships with Wayport and Boingo (respectively) are a good first step, those alone will not be enough to solve the 3G offload problem. Wayport and Boingo’s hotspot networks do not provide ubiquitous Wi-Fi coverage in even the largest metro areas, meaning that in order to provide effective 3G offload, the carriers must hope that customers stay within the designated hotspots (which most do not).

The Role of WiMAX in Extending Wi-Fi Offload Networks

So if carriers must rely on Wi-Fi to offload data from the 3G network in order to provide more reliable service to their users, but those users do not always stay within the bounds of the current network of small, distributed hotspots, what are they to do? One solution that many are pursuing is to utilize fixed WiMAX and point-to-multipoint solutions to provide the backhaul for large-scale Wi-Fi networks. In contrast to the ill-fated “municipal Wi-Fi” market of years past, which pitched Wi-Fi mesh technology as the ideal solution for these large-scale networks, many operators and WISPs have since learned that a hybrid approach, using high-performance point-to-multipoint technology for backhaul and Wi-Fi for access, is far more feasible.

Of course, both Wi-Fi mesh and point-to-multipoint technologies have their own merits. So let’s take a look at the primary benefits and differences between Wi-Fi mesh and the hybrid Point-to-Multipoint/Wi-Fi approach:

Wi-Fi Mesh

Wireless mesh technology refers to modified Wi-Fi radios that connect to each other in a daisy-chained or “multi-hop” fashion, enabling traffic to be passed from radio to radio en route to its final destination. Together, these nodes blanket a large area with a single network, relaying traffic hop by hop toward its destination. Not all mesh technologies are alike, but the same fundamental claims are made for nearly all of them: self-configuring operation, self-healing, and the built-in redundancy and reliability of the multi-hop environment.

However, this topology can often prove problematic, especially for latency-sensitive applications. Latency, unpredictable or undesirable behavior, dynamic bottlenecks and the resulting capacity shortages have been all too common when trying to support voice and video applications on wireless mesh. That is not to say that wireless mesh is incapable of delivering these applications, but several misconceptions and unrealistic expectations regarding performance and ease of use have caused a great deal of confusion around wireless mesh.
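As a rough illustration of why multi-hop paths struggle with latency-sensitive traffic, consider the minimal sketch below. The per-hop delay and the first-hop throughput are assumed figures, and the halving of throughput per hop is only a common rule of thumb for single-radio mesh nodes that receive and retransmit on the same channel.

```python
# A minimal sketch of how multi-hop mesh paths degrade latency and capacity.
# The per-hop figures below are illustrative assumptions, not measurements:
# each relay adds forwarding/contention delay, and single-radio nodes that
# receive and retransmit on the same channel roughly halve throughput per hop.

PER_HOP_LATENCY_MS = 4.0           # assumed store-and-forward + contention delay per hop
FIRST_HOP_THROUGHPUT_MBPS = 20.0   # assumed usable throughput on the first hop

def mesh_path_estimate(hops: int):
    latency_ms = hops * PER_HOP_LATENCY_MS
    throughput_mbps = FIRST_HOP_THROUGHPUT_MBPS / (2 ** (hops - 1))
    return latency_ms, throughput_mbps

for hops in range(1, 6):
    latency, throughput = mesh_path_estimate(hops)
    print(f"{hops} hop(s): ~{latency:.0f} ms added latency, ~{throughput:.1f} Mbps usable")
```

By the fourth or fifth hop the usable capacity has collapsed to a small fraction of the first hop, which is the dynamic bottleneck problem in miniature.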

Because wireless mesh radios (also called nodes) are designed to operate as both transmitters and repeaters, these networks can be a convenient means of extending connectivity around obstacles that might otherwise prevent a direct line-of-sight link. This is referred to as non-line-of-sight (NLOS) operation. Given the omnidirectional antennas used in most wireless mesh networks, however, the lower overall system gain and increased reception of interference can cause mesh units to suffer frequent outages and spend more cycles adapting to the environment. So while NLOS capabilities can be realized in ideal situations (environments with little or no radio noise or interference, which are rare), wireless mesh deployments often fall victim to the interference of the environments in which they are deployed.
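The system-gain point can be made concrete with a simple free-space link-budget comparison. The transmit power, antenna gains, distance, frequency and noise-plus-interference floor below are all assumptions chosen for illustration; the only fixed piece is the standard free-space path loss formula.

```python
# Illustrative link-budget comparison (assumed figures): a low-gain omni
# antenna on a mesh node leaves far less margin above a noisy urban floor
# than a higher-gain directional/sector antenna over the same link.

import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB."""
    return 32.44 + 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz)

TX_POWER_DBM = 20.0                    # assumed transmit power
NOISE_PLUS_INTERFERENCE_DBM = -85.0    # assumed noise + interference floor
DISTANCE_KM, FREQ_MHZ = 1.0, 2400.0    # assumed 1 km link at 2.4 GHz

for label, antenna_gain_dbi in [("omni mesh node", 6.0), ("sector/directional", 16.0)]:
    # Antenna gain applied at both ends of the link.
    rx_dbm = TX_POWER_DBM + 2 * antenna_gain_dbi - fspl_db(DISTANCE_KM, FREQ_MHZ)
    margin_db = rx_dbm - NOISE_PLUS_INTERFERENCE_DBM
    print(f"{label}: received ~{rx_dbm:.1f} dBm, ~{margin_db:.1f} dB above the assumed floor")
```

The exact numbers matter less than the gap: the directional link carries tens of dB more margin, which is what lets it ride out the interference that knocks omni-based mesh nodes offline.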

As appealing as the term “self-configuring” may be, any mission-critical network should be carefully designed to operate to a specific availability and within parameters that are understood and agreed upon by all parties. Dynamic and adaptive technologies are, by nature, not that specific, so regardless of which topology is used, a good rule of thumb is to minimize the variables and maintain a comprehensive understanding and documentation of how each device in the network is configured. The technology chosen should also provide enough management over these functions to keep the system under control.

The “self-healing” trait touted as a strength of wireless mesh systems is also worth examining. You must ask yourself: why is the system in need of healing in the first place? A properly designed and installed wireless network with adequate signal should deliver 99.999 percent availability at the modulation required to support large amounts of data-centric application traffic.
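For context, 99.999 percent availability is a demanding target. A quick calculation shows how little downtime it actually allows over a year:

```python
# How much downtime "five nines" actually permits over a year, in minutes.

availability = 0.99999
minutes_per_year = 365 * 24 * 60
allowed_downtime = minutes_per_year * (1 - availability)
print(f"99.999% availability permits ~{allowed_downtime:.1f} minutes of downtime per year")
```

That works out to roughly five minutes a year, which leaves essentially no room for a network that routinely needs to “heal” itself.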

So, while wireless mesh technologies certainly have their uses, in situations where high reliability and performance are necessary – like in carrier networks – wireless mesh can present a few problems.

WiMAX/Point-to-Multipoint as Backhaul for Wi-Fi

Point-to-Multipoint systems – especially when paired with Wi-Fi for the access piece of the offload network – are often a cost-effective means of obtaining the advantages of wireless mesh without the liabilities. Point-to-Multipoint technologies implement central base station units (BSUs) that connect to multiple subscriber units (SUs). This enables the deployment of separate, lower-cost SUs throughout the network, where each SU backhauls the traffic from multiple outdoor Wi-Fi access points (APs) directly back to the BSU. The BSU acts as the aggregation point for the traffic from all SUs.
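Here is a minimal sketch of how that aggregation works. The AP loads and the usable sector capacity at the BSU are hypothetical numbers chosen for illustration, not figures for any specific product.

```python
# A minimal sketch (with hypothetical numbers) of traffic aggregation in a
# point-to-multipoint offload design: Wi-Fi APs feed subscriber units (SUs),
# and every SU backhauls directly to the central base station unit (BSU).

from dataclasses import dataclass, field
from typing import List

@dataclass
class AccessPoint:
    offered_load_mbps: float   # average Wi-Fi traffic this AP needs backhauled

@dataclass
class SubscriberUnit:
    aps: List[AccessPoint] = field(default_factory=list)

    def offered_load(self) -> float:
        return sum(ap.offered_load_mbps for ap in self.aps)

@dataclass
class BaseStationUnit:
    sector_capacity_mbps: float              # assumed usable PMP sector capacity
    sus: List[SubscriberUnit] = field(default_factory=list)

    def aggregate_load(self) -> float:
        return sum(su.offered_load() for su in self.sus)

    def is_oversubscribed(self) -> bool:
        return self.aggregate_load() > self.sector_capacity_mbps

# Example: one sector serving 4 SUs, each backhauling 3 APs at ~3 Mbps apiece.
bsu = BaseStationUnit(sector_capacity_mbps=35.0,
                      sus=[SubscriberUnit([AccessPoint(3.0) for _ in range(3)])
                           for _ in range(4)])
print(f"Aggregate load at BSU: {bsu.aggregate_load():.0f} Mbps "
      f"(oversubscribed: {bsu.is_oversubscribed()})")
```

The point of modeling it this way is that capacity planning stays simple: every SU is one direct link back to the BSU, so the operator only has to size each sector against the sum of the AP loads behind it.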

Unlike wireless mesh deployments, which can introduce detrimental latency, performance and reliability issues due to the non-direct, multi-hop nature of the technology, Point-to-Multipoint networks provide a series of direct connections from the many SUs back to the central BSU. This strikes a balance between the dedicated connectivity required to deliver the quality and performance that mission-critical data offload networks demand and the cost-effectiveness of a distributed network.

The best Point-to-Multipoint systems utilize a polling-based scheduling mechanism (as WiMAX does) to provide an efficient and effective means of distributing bandwidth amongst the end points/SUs fairly and in a controlled manner. This helps to provide quality of service (QoS) in the wireless network, and to ensure that each node receives the bandwidth necessary to deliver a constant, reliable backhaul link for the Wi-Fi access points connected to it.
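To show the idea in miniature, here is a toy polling-style allocator: the base station grants each SU the lesser of its demand and its weighted share of a fixed pool, then hands any leftover capacity to SUs that are still short. The demands, weights and pool size are made up, and this is not the actual WiMAX MAC scheduler, just an illustration of controlled, weighted bandwidth distribution.

```python
# Toy polling-style bandwidth allocator in the spirit the article describes:
# each SU is polled in turn and granted capacity from a fixed pool according
# to per-SU QoS weights. Illustrative only; not the WiMAX MAC scheduler.

def poll_and_grant(demands_mbps, weights, pool_mbps):
    """Grant each SU the lesser of its demand and its weighted share,
    then redistribute leftover capacity to SUs that are still short."""
    grants = {su: 0.0 for su in demands_mbps}
    remaining = pool_mbps
    total_weight = sum(weights.values())
    # First pass: weighted fair share, capped at each SU's demand.
    for su, demand in demands_mbps.items():
        share = pool_mbps * weights[su] / total_weight
        grants[su] = min(demand, share)
        remaining -= grants[su]
    # Second pass: hand leftover capacity to SUs whose demand is unmet.
    for su, demand in demands_mbps.items():
        if remaining <= 0:
            break
        extra = min(demand - grants[su], remaining)
        grants[su] += extra
        remaining -= extra
    return grants

demands = {"SU-1": 10.0, "SU-2": 2.0, "SU-3": 25.0}   # hypothetical Mbps demands
weights = {"SU-1": 1, "SU-2": 1, "SU-3": 2}           # assumed QoS weights
print(poll_and_grant(demands, weights, pool_mbps=30.0))
```

The design choice worth noting is that no single hungry SU can starve the others: every node gets at least its weighted share before any leftover capacity is handed out.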

Though Point-to-Multipoint has traditionally required line-of-sight connectivity between SUs and the BSU, recent advances have enabled NLOS functionality in some systems, providing even greater ease of deployment and configuration. For the best performance, though, line-of-sight operation on Point-to-Multipoint links still yields the greatest return.

If it Failed Before, Why Would it Succeed This Time?

Unfortunately, some people have tried to link the rise of “data offload” networks to the ill-fated “municipal Wi-Fi” movement of years past. The “municipal Wi-Fi” market aimed to blanket entire cities with Wi-Fi networks, giving everyone ubiquitous wireless broadband access. Unfortunately, the mantra of the movement was “Free Wi-Fi for All”, and it expected cities to pay for the Wi-Fi networks as a service to their residents (hence the muni Wi-Fi moniker). That mantra proved to have two very different effects: first, it made citywide Wi-Fi a wildly popular idea that the media latched onto immediately, creating a large market of vendors competing to secure free Wi-Fi deals; and second, it left most of these networks without a significant revenue stream, which meant that cities could not afford to deploy them given the lack of ROI (or were forced to shut them down if they did manage to deploy them).

So if municipal Wi-Fi failed, why would this proposed second generation of citywide Wi-Fi networks succeed?

It is important at this juncture to realize that the proposition of citywide Wi-Fi networks to offload 3G data traffic from carrier networks and the proposition of “municipal Wi-Fi” to provide free wireless connectivity to everyone are fundamentally different at their core. There are three main differences between the two models: the problem they aim to solve, the business model, and the return.

3G Data Offload Networks vs. First-Gen “Municipal Wireless” Networks

There is a significant difference between the 3G offload model the market is currently proposing and the municipal Wi-Fi networks of old. With 3G offload, there is a real problem that could cost the carriers billions if not solved. There is an actual business model, made up of existing paid subscribers, that can subsidize the cost of the 3G offload Wi-Fi network buildouts. And there is a clear ROI: preventing the possible loss of millions in revenue if unhappy customers were to switch carriers, and deferring the need to spend billions on 4G network buildouts just to keep those customers happy.

Learning from the Past, Moving Forward

Three years ago, there simply were not anywhere near as many Wi-Fi-enabled smartphones as there are today. So when the municipal Wi-Fi market exploded, there was nowhere near the demand for ubiquitous Wi-Fi that there is today. Not only do millions more end users have the capability to access data networks everywhere they go, but they ARE accessing data everywhere they go, creating an immediate need for the carriers to respond in order to sustain user satisfaction.

3G data offload is a real, revenue-affecting problem for the carriers, one in desperate need of a solution. What’s more, it is a movement funded by carriers to ensure network performance for their paying customers, as opposed to a free model with no significant revenue to sustain it. And even as carriers continue to plan and roll out their 4G networks, the exponential growth in mobile data usage expected as smartphones continue to proliferate means that the Wi-Fi networks deployed for 3G data offload today will not only adopt 4G point-to-multipoint technology as their backhaul, but will continue to serve as 4G data offload networks in the future.

Comments

nik m Ismail March 10, 2010 at 8:41 pm

What device can you use to have both 3G and WiMAX on mobility? Is it available now?

Alpha Omega Wireless April 24, 2010 at 8:57 pm

(Posted also on Robb's Blog – Please follow it. He has a lot of great information).
As a wireless integrator I agree to a point. As an avid iPhone (3G) mobile user I again agree somewhat. I do enjoy using the built-in Wi-Fi on my mobile device. I use it all the time at home and at the office. Sometimes even at airports. I agree that in many places (depending on your location) 3G networks are heavily congested. 3G is expanding, though, and provides decent bandwidth (soon to get better as 4G gets deployed – someday).

The problem with the idea of large-scale Wi-Fi deployments is who is going to pay for it and who would maintain it? The telcos have no drive to spend huge money on another means of backhaul when they are already spending a lot to build out their current networks. 3G / 4G is their core competency and business model, not Wi-Fi. If they were to try and deploy a large-scale Wi-Fi network they would still need to backhaul it to one of their gateways (most likely a cellular tower location). They would have to add large-scale back-end monitoring systems and support centers. Other issues could be: interference, site acquisition, security, liability, etc.

With the push for faster mobile networks LTE / WiMax a lot of money and effort is being spent. Carriers moving to a WiMax model makes sense, but Wi-Fi I don't feel would ever be a reality. It would still be cheaper and faster for the carriers to move to 4G technologies rather than developing an augmenting Wi-Fi network.

Wi-Fi is a great option on many mobile devices, but someone has to provide the infrastructure and maintain it. No one will build it if it doesn't add revenue (no matter how good the technology is). I would stick with WiMax!

