Telecommunications, 5G and Shannon Limit

Telecommunication – the transmission of signals over a distance for the purpose of communication. The engineering aspect of telecommunications focuses on the transmission of signals or messages between a sender and a receiver, irrespective of the semantic meaning of the message. The Open Systems Interconnection model (OSI model) is a conceptual model that characterises and standardises the communication functions of a telecommunication or computing system without regard to its underlying internal structure and technology.

Such an abstraction allows the functionality provided by layer N to be defined in terms of layer N-1. Communication protocols enable an entity in one host to interact with a corresponding entity at the same layer in another host. A communication protocol is a system of rules that allows two or more entities of a communications system to transmit information.

Information Theory and Fundamental Limits: A revolution in wireless communication began in the first decade of the 20th century with the pioneering developments in radio communications by Guglielmo Marconi, building on earlier pioneers including Charles Wheatstone and Samuel Morse (inventors of the telegraph), Alexander Graham Bell (inventor of the telephone), Edwin Armstrong and Lee de Forest (inventors of radio) and many others. Among these figures, Claude Shannon stands out: he defined “information”, introduced a simple abstraction of human communication (the channel) and showed that data can only be transmitted within constraints of time and quantity, constraints that depend on the medium (copper wire, fibre optics, electromagnetic waves, etc.). He developed a mathematical basis of communication that provided the first systematic framework for optimally designing telephone systems. The main questions motivating this work were how to design telephone systems to carry the maximum amount of information, and how to correct for distortions on the lines.

He used Boolean algebra, in which problems are solved by manipulating two symbols, 1 and 0, to establish the theoretical underpinnings of digital circuits, which evolved into modern switching theory. All communication lines today are measured in bits per second, the same unit used to measure the computer storage needed for pictures, voice streams and other data. He came to be known as the “Father of Information Theory”.

5G: The five generations of wireless technology are listed below. 5G makes a big statement with peak theoretical data rates of 20 Gbps for downlink and 10 Gbps for uplink.

  • 1G: Voice only, analog cellular phones. Max speed: 2.4 Kbps
  • 2G: Digital phone calls, text messaging and basic data services. Max speed: 1 Mbps
  • 3G: Integrated voice, messaging and mobile internet; first broadband data for an improved internet experience and use of applications. Max speed: 2 Mbps
  • 4G: Voice, messaging, high-speed internet and high-capacity mobile multimedia; faster mobile broadband. Max speed: 1 Gbps
  • 5G: A revolution in user experience, speeds and technology, connecting trillions of devices in the IoT and supporting smart homes, smart buildings and smart cities. Max speed: 10 Gbps

However, these are the numbers as defined in the specifications. Real-world speeds are lower, and a more useful metric defined by the International Telecommunication Union (ITU) for the IMT-2020 standard (essentially the 5G standard) is the user experienced data rate: the data rate available to users in at least 95% of the locations where the network is deployed, for at least 95% of the time.

As cellular communication has progressed over the last two decades, we have rapidly approached the theoretical limits for wireless data transmission set by Shannon’s Law. The equation relates the total data throughput capacity of a system to its spectrum (radio frequencies), the number of antennas and the signal-to-noise ratio on the communication channel.
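To make the limit concrete, here is a minimal sketch of the classic Shannon–Hartley formula for a single channel, C = B · log2(1 + S/N). The bandwidth and SNR figures below are illustrative assumptions for the sake of the example, not 5G specifications.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: maximum error-free data rate of a noisy channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers: a 20 MHz channel at two signal-to-noise ratios.
b = 20e6                       # bandwidth in Hz (assumed for illustration)
for snr_db in (10, 30):
    snr = 10 ** (snr_db / 10)  # convert dB to a linear ratio
    c = shannon_capacity_bps(b, snr)
    print(f"SNR {snr_db} dB -> capacity {c / 1e6:.1f} Mbps")
```

Note how capacity grows only logarithmically with SNR but linearly with bandwidth, which is exactly why 5G’s move to wider swaths of spectrum matters so much.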

I have described a few components of 5G in this article. The physical properties of higher frequencies (millimeter waves: 30 to 300 GHz) open up more spectrum, allowing more data to move across at a given instant. This buys another decade or two of pushing against the limits of Shannon’s law and takes us into a new era of technology and experiences. I will explore this equation in the context of #5G in subsequent posts.

5G shots

5G will be the most transformative tech of our lifetime. I will try and explain a few terms to demystify the landscape.

Millimeter Wave: An entirely new section of spectrum, never before used for mobile services. Millimeter waves are broadcast at frequencies between 30 and 300 gigahertz, compared to the bands below 6 GHz used today. They are called millimeter waves because their wavelengths range from 1 to 10 mm, whereas the radio waves that serve today’s smartphones measure tens of centimeters in length.
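The “millimeter” label follows directly from wavelength = speed of light / frequency; a quick sanity check:

```python
# Wavelength in millimetres for a given carrier frequency.
C = 299_792_458  # speed of light, m/s

def wavelength_mm(freq_hz: float) -> float:
    return C / freq_hz * 1000  # metres -> millimetres

print(wavelength_mm(30e9))   # ~10 mm (low end of the mmWave band)
print(wavelength_mm(300e9))  # ~1 mm (high end of the mmWave band)
print(wavelength_mm(2.4e9))  # ~125 mm, a typical sub-6 GHz band
```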

Carrier Aggregation: Combines multiple frequency bands and leverages them together. A user’s device can simultaneously be connected on both the 700 MHz and 1900 MHz bands of the spectrum, and can therefore make better use of all the network resources. This enables higher data speeds (more data downloaded or uploaded at a given instant) because there is more room on the spectrum for traffic to move around.

256 QAM and 4×4 MIMO: Quadrature amplitude modulation (QAM) is the name of a family of digital modulation methods, and a related family of analog modulation methods, widely used in modern telecommunications to transmit information. With this approach the carrier is able to pack a lot more information into the same space without losing quality, which improves both speed and efficiency. Combine this with 4×4 MIMO (multiple input, multiple output), which layers the network (stacking up in another dimension) and also doubles the smartphone’s antennas, and the amount of data that can be carried at any instant, along with the speed, multiplies many fold.
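As a rough sketch of why these two features multiply throughput: 256-QAM encodes log2(256) = 8 bits per symbol, and 4×4 MIMO can carry up to four parallel spatial streams. The symbol rate below is a purely illustrative assumption, and real links lose some of this to coding overhead.

```python
import math

def peak_rate_bps(constellation: int, mimo_streams: int, symbol_rate: float) -> float:
    """Idealized peak rate: bits/symbol x parallel streams x symbols/sec."""
    bits_per_symbol = math.log2(constellation)
    return bits_per_symbol * mimo_streams * symbol_rate

sym_rate = 15e6  # assumed symbols/sec per stream (illustrative only)
print(peak_rate_bps(64, 2, sym_rate) / 1e6, "Mbps")   # 64-QAM with 2x2 MIMO
print(peak_rate_bps(256, 4, sym_rate) / 1e6, "Mbps")  # 256-QAM with 4x4 MIMO
```

Moving from 64-QAM/2×2 to 256-QAM/4×4 multiplies the ideal rate by (8/6) × (4/2), i.e. roughly 2.7×, without using any extra spectrum.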

Full Duplex: With full duplex, a transceiver (within a cellphone) will be able to transmit and receive data at the same time on the same frequency, doubling the capacity of wireless networks at their most fundamental physical layer.

Small Cells: Portable miniature base stations that use very little power to operate and can be placed every 250 meters or so throughout cities. To prevent signals from being dropped, carriers may blanket a city with thousands of small cell stations to form a dense network that acts like a relay team, handing off signals like a baton and routing data to users at any location.

The features and benefits of 5G will evolve over time, with transformative changes coming over the next several years as standards for eMBB, Critical IoT, and Massive IoT use cases are developed by 3GPP. Use cases for 5G fall into three broad categories: enhanced mobile broadband, massive IoT, and critical IoT.

  • Enhanced broadband will provide higher capacity and faster speeds for many of today’s common use cases. This includes fixed wireless access, video surveillance, enhanced experiences in brick-and-mortar retail locations, mobile phones and others.
  • Massive IoT will support the scaling of machine-type communications. This solution will support health monitoring, wearable communications, fleet/asset management, inventory optimization, smart homes and more.
  • Critical IoT will enable new use cases that require ultra-reliable, low-latency communications. It is a geographically targeted solution for smart factories, smart grids, telemedicine, traffic management, remote and autonomous drones and robotics, mobile bio-connectivity, interconnected transport, autonomous vehicles and more.

Quest for best-in-class dev tools and platforms

A best-in-class digital workforce was one of our goals in 2019, and we made several strides in that direction. Developer productivity is one measure of this goal, and in that spirit, of empowering developers with best-in-class tools and productivity enhancers, I got a chance to evaluate Sourcegraph. In this article I want to share the journey of this evaluation and my experience, primarily from a developer’s point of view.

Sourcegraph is a code search and web-based code intelligence tool for developers. It offers all of its features at scale in a large “space”:

  • Public service: [24 programming languages] x [all open-source repos] x [all repo hosts: GitLab, GitHub, Bitbucket, AWS CodeCommit] x [all branches]
  • Private service: [24 programming languages] x [all private repos on a self-hosted server] x [GitLab, GitHub, Bitbucket] x [all branches]

My viewpoint into the evaluation was along the three pillars of Sourcegraph: Search, Review and Automation. Describing the rich feature set of Sourcegraph is an exercise in itself; instead I will try to make a case for why this tool stands out and how it improves a developer’s productivity.

Code Search

Search: While creating code, a developer will need to look at the definition of a method. Most of the time this definition exists in the IDE (on the laptop), and the search becomes a matter of remembering the name of the method and reaching it using the IDE’s search functionality. However, it becomes trickier if one does not remember the exact phrase, does not know what to look for, or if that snippet of code is not available in the local dev environment: one has to clone multiple repos in order to navigate to the definition, find references and complete the review. This is where I see Sourcegraph adding value. Using advanced features like regex search (which allows one to search within a subset of languages like Python, Go and Java), symbol search (which searches only variable and function names) and comby search (more powerful than regex, e.g. able to match balanced parentheses), the developer is empowered to perform the following actions (and more), right in the browser:

  • How an API should be invoked
  • What is the impact of modifying an existing API
  • Find variables starting with a specific prefix
  • Find a function call and replace the argument (via comby search)
  • Refactoring a monolith into microservice
  • Learning new ways to write code from both internal private repos and opensource repos
  • Learning enterprise standard ways to read tokens securely, pack PoP tokens in microservices and client side code
  • Search through 1000s of private repos with GBs of data, where it is not always possible/efficient to clone them locally
  • Where are the environment configs declared
  • Find specific toggles across the source
  • How to implement a given algorithm
  • Use GraphQL APIs (code-as-data) to power internal telemetry around source code metadata
  • What recently changed in the code about (feature, page, journey etc…) that broke it? One can search commit diffs and commit messages
  • Use predefined searches curated for you or your team/organization. These can also be used to send you alerts/notifications when developers add or change calls to an API you own
  • Search for instances of secret_key in the source, e.g.:
    repo:^github\.com/gruntwork-io/ file:.*\.tf$ \s*secret_key\s*=\s*".+"
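The GraphQL point above can also be scripted. The sketch below only builds the request body for a Sourcegraph-style GraphQL code search; the endpoint path, the exact query shape, and the result fields are assumptions to verify against your instance’s API documentation before use.

```python
import json

# Hypothetical instance URL -- replace with your own self-hosted server.
SOURCEGRAPH_URL = "https://sourcegraph.example.com/.api/graphql"

# Assumed query shape for a Sourcegraph-style search API.
SEARCH_QUERY = """
query ($q: String!) {
  search(query: $q) {
    results { matchCount }
  }
}
"""

def build_search_payload(search_string: str) -> dict:
    """Build the GraphQL request body for a code search."""
    return {"query": SEARCH_QUERY, "variables": {"q": search_string}}

payload = build_search_payload(r'repo:^github\.com/gruntwork-io/ secret_key')
print(json.dumps(payload)[:60])
```

From here, sending the payload with an `Authorization` header to the instance (e.g. via `urllib.request` or `requests`) would power the kind of internal telemetry described above.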

Sourcegraph enables all of the above by offering search that spans multiple code hosts (GitHub, GitLab, Bitbucket and many others), multiple repos on each host, all branches of a repo, and both open-source repos and privately held enterprise repos. It should be noted that searching across both open-source and private enterprise repos is currently not possible in one single search; it takes two different search queries. Also, searching code is not the same as searching text: if you search for “http request” on GitHub, the results end up with a bunch of noise that includes “httprequest” and “http_request” as well. All of this contributes to shipping code faster.

Code Review

Review: With the advances in source code hosting tools like GitHub and GitLab, code reviews have become less arcane: the reviewer receives a notification of the review, opens it in a browser that renders the diff, comments, approves or denies the request, and moves forward with the workflow, all inside the browser. Everyone realizes the importance of this phase, which offers the opportunity to catch non-mechanistic patterns, avoid costly mistakes (logical/semantic errors, faulty assumptions, etc.) and also serves as a medium of knowledge transfer. Sourcegraph goes a couple of steps further and empowers the reviewer with source code navigation right in the browser, where you can extend and decorate code views using Sourcegraph extensions:

  • Navigate source code as if in an IDE. Hover over a method name to “Go to Definition”. This is a huge time saver and makes the process efficient (through their indexing algorithm).
  • In a similar vein, in addition to seeing the definition of a method, the reviewer might want to check who else is using or referencing it in order to assess the impact of the change. SG offers “Find References” across all repos that are indexed on the server.
    • Note: SG is self-hosted (tm/sourcegrapheval) in order to index all enterprise private repos, so the code never leaves the network. For searching non-enterprise open-source code, however, there is a publicly available cloud service at sourcegraph.com. I would have loved an option where the private server falls back on the public server seamlessly, but maybe the SG team will make that available in future releases.
  • Corroborate the change with test coverage numbers and runtime traces. While the above two points (“Go to Definition” and “Find References”) constitute 80% of the use cases, the reviewer can feel more confident of a change when the review also offers test coverage numbers. SG offers that and goes one step further by making trace performance numbers from runtime available (via supported services that have to be enabled).

Workflow Automation

Automation: I have discussed two pillars so far: Search and Review. Now think about performing those acts at enterprise scale, with appropriate roles and visibility, via workflows. SG offers a beta feature called Automation which does exactly that: remove legacy code, fix critical security issues, and pay down tech debt. It provides the ability to create campaigns across thousands of repositories and code owners; Sourcegraph automatically creates and updates all of the branches and pull requests, and you can track progress and activity in one place. This is huge. Imagine that scale! This capability enables the following use cases:

  • Remove deprecated code: Monitor for legacy libraries, and coordinate upgrading all the affected repos iteratively
  • Triage critical issues
  • Dependency updates
  • Enable distributed adoption
  • Reduce the cost and complexity of sunsetting legacy applications

Essentially, it offers the sum total of all code intelligence (enterprise-wide) in a split second via its search, upping the ante for developer experience. Here is a quick feature-set comparison with other similar solutions: https://about.sourcegraph.com/workflow/#summary-feature-comparison-chart

Proof of Concept

Installation was super simple: the distribution is available as a Docker container, which I deployed on one of our EC2s. I was also able to integrate with Okta (our AuthN service) pretty seamlessly, and colleagues from across the enterprise were able to play around and try some use cases. Once adoption improves, I plan to deploy Sourcegraph on a Kubernetes cluster.

While integrating with Okta, I noticed a small issue with the SAML handshake. When I mentioned it to the Sourcegraph team, they hopped on a call, helped debug it, made a quick change in the product and provided a release candidate with the hotfix, which I was able to upgrade to in a matter of minutes without losing any of the configuration done thus far. Loved the experience!

What’s Next

developer platform is the one place where Developers and DevOps teams go to answer questions about code and systems. It ties together information from many tools, from repositories on your code host to dependency relationships among your projects and application runtime information.

-Sourcegraph

I am encouraged to see how it fits into T-Mobile’s strategy of creating an environment that facilitates rapid experimentation and enables faster change, with the goal of empowering developers and DevOps to build better software faster, safer and healthier.

In that spirit, T-Mobile has made crucial moves in the last few years towards optimizing its Continuous Delivery platform: from custom CI/CD processes on-prem, to industry-standard processes on hosted solutions on-prem, to cloud-based GitLab, a one-stop shop for source code management and the DevOps lifecycle along with DevSecOps. In Nov 2019, Sourcegraph and GitLab announced (relevant MR) native integration, offering a big improvement to the developer UX. Although the Sourcegraph browser extension will continue to work, the integration with GitLab means that developers unwilling to install browser extensions can still enjoy valuable features seamlessly integrated with their GitLab workflow.

Note: The enterprise private repositories still have to be hosted on a private Sourcegraph server; it is just that the experience is now delivered natively via GitLab workflows rather than via browser plugins.

Given my positive experience and the immense potential, I plan to recommend Sourcegraph to our procurement team, in the hope of making it available to all Dev, DevOps and DevSecOps teams at T-Mobile.

Blog! Why bother?

It’s been on my mind, for a few months now, to revive my interest in blogging and up my ante around it. Given the hiatus, topic selection proved harder than I first thought. A few topics sifted through my mind as possible candidates, but my train of thought wandered through multiple paths, viewpoints and perspectives, all of which made me question, “Why am I doing this?”

I wanted to take a step back and reflect on why I should be blogging, and whether, after spending the time and effort, it would still have the impact I assumed it would. Times have changed and information consumption patterns have evolved: is traditional blogging still an authentic distribution channel? What do I want to achieve? A few answers came to mind:

  • Over the years, I have acquired skills through my own experiences, and I would like them to find expression.
  • I would like to argue a viewpoint and/or debate the pros and cons of multiple perspectives on a given story.
  • I want to reinforce my own knowledge of topics; authentic writing is one way to showcase expertise and, in the process, build my brand.
  • I want to build a community of like-minded people to debate and bounce ideas off of, kindred spirits having a lot in common.
  • I want to keep the experience educational and fun, where my articles are a measure of my progress and concrete enough to help others on a similar journey.
  • I would like to publish articles on topics like Digital Transformation and Programming to start with.

When it comes to delivering an idea, I still believe a written blog is a good medium (compared to audio and video) because text is searchable, requires less maintenance and carries less noise. However, there is a plethora of platforms: Medium, WordPress, LinkedIn, dev.to, and many others related to one’s niche skill set. I am not entirely clear on how to manage this, other than publishing the same content on all relevant platforms.

It also feels unwise to ignore platforms like YouTube, Twitch, SoundCloud, podcasts, etc., so I will experiment with them based on the context of the content. Experimentation is the key, and I should find the right balance in due course.

So what’s next? I want to pick one topic and blog it to the end. Narrowing down to one is not something that comes easily to me, given my “generalist” nature (a blog topic by itself for another day). It may be tactful and practical to identify parts of my daily work and blog about my journey in accomplishing them. So stay tuned!

Startup Weekend Seattle Space

Our team “Star Trial” won the 2nd prize for #pitching and creating a #businessplan for matching #aerospace companies with testing facilities that offer services to certify the function of a gadget in #space. We built the team with members who were interested in a similar cause and had great fun executing it. More importantly, we learned a lot. Great guidance from #johnsechrest, wonderfully organized by Michael Doyle, Sean McClinton and Stan Shull, and graced by industry stalwarts like #DonWeidner #lisarich #russellhannigan #ronfaith and #aaronbird. Thanks to our #mentors @vladimir.baranov.newyork @joseph.gruber

At the game 


Could not have asked for a better day for the game. T-Mobile sponsored the event for its employees and it was a good networking opportunity. We were at the apex of the stadium and the vantage point was amazing. Wish I had gotten a better parking spot, but other than that it was a perfect outing, with the Seattle Mariners winning.