Tuesday, June 21, 2011

Why multi-frequency devices that operate across 802.11a/b/g/n are important in mission-critical enterprise computing

Most enterprises that rely on wireless networks for mission-critical applications employ a mixed-frequency approach, using both 2.4 GHz and 5 GHz, to support different levels of quality of service. This is necessary because of the interference and congestion issues that surround 2.4 GHz-only wireless network deployments. Hospital environments and other businesses that carry mixed-use traffic, i.e. both public and private network access, need to be able to ensure that certain traffic is given higher priority than the rest.

As you can imagine, a user trying to retrieve important, perhaps lifesaving, patient data on the same frequency on which a guest is downloading a YouTube video of a dancing cat is problematic if there is not enough bandwidth for both.
Because internet and intranet access is important to guests, visiting physicians and other “non-enterprise” users, it is not unusual for the available 2.4 GHz bandwidth to be used up; mission-critical data therefore needs to be able to flow over 5 GHz frequencies that consumer-oriented devices cannot use.
Additionally, the use of 802.11n 40 MHz channel widths in both 2.4 GHz and 5 GHz raises issues that can pose a real challenge to VoIP handsets and other devices that lack the client-side capabilities to handle a mixed a/g/n environment.

An example of this is dedicating a single 5 GHz channel to rendering PACS images to and from a server, or providing access to EMR packages over VLANs that live in the dedicated 5 GHz frequency space.
This is done to provide both high QoS and higher bandwidth while allowing standard, non-mission-critical data to flow over 2.4 GHz.
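
As an illustration only, here is a minimal sketch of how such an allocation policy might be written down before it is translated into controller configuration. The traffic classes, SSID names, VLAN IDs and QoS labels are invented for the example and are not taken from any particular vendor's product.

# Hypothetical band/VLAN allocation policy for a mixed-use hospital WLAN.
# All names and numbers below are illustrative, not real configuration.
POLICY = {
    "PACS":  {"band": "5 GHz",   "vlan": 110, "ssid": "CLIN-IMAGING", "qos": "high"},
    "EMR":   {"band": "5 GHz",   "vlan": 120, "ssid": "CLIN-DATA",    "qos": "high"},
    "VOIP":  {"band": "5 GHz",   "vlan": 130, "ssid": "CLIN-VOICE",   "qos": "voice"},
    "GUEST": {"band": "2.4 GHz", "vlan": 900, "ssid": "GUEST",        "qos": "best effort"},
}

def placement(traffic_class: str) -> str:
    """Describe where a given class of traffic is carried."""
    p = POLICY[traffic_class]
    return f"{traffic_class}: SSID {p['ssid']} on {p['band']}, VLAN {p['vlan']}, QoS {p['qos']}"

for cls in POLICY:
    print(placement(cls))

The point of writing the policy down explicitly is that guest traffic never shares a band, VLAN or QoS class with clinical traffic, which is exactly the isolation argued for above.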

From a wireless network design standpoint, the main issue with many consumer devices, such as the latest crop of Android and Apple products, is that they cannot use the higher, less cluttered frequencies because they do not have 5 GHz radios. This creates the possibility that a single guest user can disrupt the flow of important, perhaps lifesaving, data if these devices are used.

In summary, it is important to use mixed-frequency, enterprise-class devices and sensible frequency isolation for critical data, while giving generic consumer-type devices non-mission-critical access to the network in a way that does not cause issues. In many environments this can only be achieved by choosing devices that have both 2.4 GHz and 5 GHz capabilities and allocating traffic to these frequencies according to priority.

Friday, August 27, 2010

Frequency allocation within enterprises

Wireless network engineers are tasked with delivering a complex array of mobile services. These services are increasingly being delivered on a new generation of ultra-mobile devices that provide combinations of data, voice and video applications, many in real time. Companies must rapidly develop strategies that allow them to balance this range of new services while delivering higher availability of existing services in order to remain competitive. This requires careful planning and management of both the wired infrastructure and the wireless radio frequency spectrum. Unfortunately, while frequency spectrum management and planning are critically important, most companies have no adopted organizational strategy for them. Such a strategy becomes essential as more services and layered network deployments are placed into the same limited amount of frequency space.

Develop a strategy for frequency asset protection
The fact that radio frequency spectrum is an asset that needs to be managed and protected is a new concept even to many seasoned networking professionals. This becomes especially critical in unlicensed frequency spaces such as those in which wireless networks operate. Devices such as microwave ovens and cordless phones coexisted in these frequency spaces long before 802.11 networks were deployed, and they can cause huge amounts of interference. Many more devices are now being deployed into these frequency spaces alongside 802.11 networks with little understanding of the ultimate impact on overall network performance or long-term organizational needs.
Therefore the planning and implementation of a strategy for the protection and management of these assets should be a paramount consideration for any organization that relies on its wireless infrastructure for delivery of mission-critical services. The main issue is how to manage and plan for the current and future use of radio devices within a very limited frequency space.

New definition of mobility
The classic mindset for wireless networks and wireless devices has been to consider them as an extension of the existing wired network. Mobility has been defined as the ability to move from place to place and use wireless networks as a point of use technology within these areas of defined wireless utility.
The new paradigm is the implementation of a strategy that takes all the services currently delivered over wired networking, including all voice, data and application services, and makes them usable while moving around, disconnected entirely from the wired infrastructure.
This definition is creating a real strategic as well as tactical dilemma. It is also changing the landscape of wireless networks, as the need for bandwidth becomes as important as coverage. It is no longer enough to be “connected” to the network; applications and services require a higher level of service and therefore a larger slice of available resources. Packet assurance across both the wired and wireless networks is viewed as part of an “end to end” deployment that includes both the application server and the end-use device.

Conclusion
The development of a wireless strategy that leverages both current and future investments is critical. The ability to include individual implementations into an overall cohesive plan is important to any organization that uses wireless in a mission critical role. Bandwidth considerations as well as connectivity are both essential parts of this strategy. This will become especially important as new services are rolled out that compete for these already limited resources.

Monday, July 12, 2010

Eight types of virtualization

Operating System Virtualization - The most prevalent form of virtualization today, virtual operating systems (or virtual machines) are quickly becoming a core component of the IT infrastructure. Generally, this is the form of virtualization end-users are most familiar with. Virtual machines are typically full implementations of standard operating systems, such as Windows Vista or RedHat Enterprise Linux, running simultaneously on the same physical hardware. Virtual Machine Managers (VMMs) manage each virtual machine individually; each OS instance is unaware that 1) it’s virtual and 2) that other virtual operating systems are (or may be) running at the same time. Companies like Microsoft, VMware, Intel, and AMD are leading the way in breaking the physical relationship between an operating system and its native hardware, extending this paradigm into the data center.
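
To make the VMM idea a little more concrete, here is a minimal sketch that simply asks a hypervisor which guest operating systems it currently knows about. It assumes a Linux host running QEMU/KVM managed through libvirt with the libvirt Python bindings installed; the connection URI and the presence of any guests are assumptions for the example, not a statement about any product mentioned above.

# Minimal sketch: ask the virtual machine manager (libvirt over QEMU/KVM)
# which guest operating systems it currently manages on this physical host.
# Assumes the libvirt Python bindings and a local qemu:///system hypervisor.
import libvirt

conn = libvirt.open("qemu:///system")      # connect to the local VMM
try:
    for dom in conn.listAllDomains():      # every guest the VMM manages
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name()}: {state}")
finally:
    conn.close()

Each guest reported here believes it owns the hardware; only the VMM sees the full list.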

Application Server Virtualization - The core concept of application server virtualization is an appliance or service that provides access to many different application services transparently. In a typical deployment, a reverse proxy will host a virtual interface accessible to the end user on the “front end.” On the “back end,” the reverse proxy will load balance a number of different servers and applications such as a web server. One server is presented to the world, hiding the availability of multiple servers behind a reverse proxy appliance. Application Server Virtualization can be applied to any (and all) types of application deployments and architectures, from fronting application logic servers to distributing the load between multiple web server platforms, and even all the way back in the data center to the data and storage tiers with database virtualization.
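
A toy sketch of that front-end/back-end split follows: one virtual address that clients see, with each request handed round-robin to a pool of hidden back-end web servers. The back-end addresses and ports are invented for the example, and a real deployment would use a hardened reverse proxy appliance rather than anything this simple.

# Toy reverse proxy: clients see one server; requests are spread round-robin
# across several hidden back ends. Addresses below are placeholders.
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKENDS = itertools.cycle([
    "http://10.0.0.11:8080",
    "http://10.0.0.12:8080",
    "http://10.0.0.13:8080",
])

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(BACKENDS)                      # round-robin selection
        with urllib.request.urlopen(backend + self.path) as resp:
            status, body = resp.status, resp.read()
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Only this one front-end address is published; the pool stays hidden.
    HTTPServer(("0.0.0.0", 8000), ReverseProxy).serve_forever()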

Application Virtualization - While they may sound very similar, Application Server and Application Virtualization are two completely different concepts. What we now refer to as application virtualization we used to call “thin clients.” The technology is exactly the same. The local laptop provides the CPU and RAM required to run the software, but nothing is installed locally on your own machine. Other types of Application Virtualization include Microsoft Terminal Services and browser-based applications.

Management Virtualization - If you implement separate passwords for your root/administrator accounts between your mail and web servers, and your mail administrators don’t know the password to the web server and vice versa, then you’ve deployed management virtualization in its most basic form. The paradigm can be extended down to segmented administration roles on a single platform or box, which is where segmented administration becomes “virtual.” User and group policies in Microsoft Windows XP, 2003, and Vista are an excellent example of virtualized administration rights.
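
As a toy illustration of that kind of segmented administration on a single box, the sketch below checks an administrator's role before allowing an action. The role names and permitted actions are invented for the example and are not tied to any real directory service or policy engine.

# Hypothetical segmented administration: each admin role may only manage the
# services assigned to it, even though everything runs on the same box.
ROLE_PERMISSIONS = {
    "mail_admin": {"restart_mail", "read_mail_logs"},
    "web_admin":  {"restart_web",  "read_web_logs"},
}

def authorize(role: str, action: str) -> bool:
    """Allow an action only if the admin's role includes it."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(authorize("mail_admin", "restart_mail"))   # True
print(authorize("mail_admin", "restart_web"))    # False: roles stay segmented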

Network Virtualization - A simple example of IP virtualization is a VLAN: a single Ethernet port may support multiple virtual connections from multiple IP addresses and networks, but they are virtually segmented using VLAN tags. Each virtual IP connection over this single physical port is independent and unaware of others’ existence, but the switch is aware of each unique connection and manages each one independently.
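
To show what the VLAN tag actually is on the wire, here is a small sketch that hand-builds an 802.1Q-tagged Ethernet frame and reads the VLAN ID back out of it. The MAC addresses and VLAN number are arbitrary example values.

# Build a minimal 802.1Q-tagged Ethernet frame and recover its VLAN ID.
# The switch uses this 12-bit ID to keep virtual networks separate on one port.
import struct

def build_tagged_frame(dst: bytes, src: bytes, vlan_id: int, payload: bytes) -> bytes:
    tag = struct.pack("!HH", 0x8100, vlan_id & 0x0FFF)   # TPID 0x8100 + TCI (PCP/DEI left zero)
    ethertype = struct.pack("!H", 0x0800)                # IPv4 payload follows
    return dst + src + tag + ethertype + payload

def vlan_of(frame: bytes) -> int:
    tpid, tci = struct.unpack("!HH", frame[12:16])       # bytes right after the two MAC addresses
    if tpid != 0x8100:
        raise ValueError("not an 802.1Q-tagged frame")
    return tci & 0x0FFF                                  # low 12 bits = VLAN ID

frame = build_tagged_frame(b"\xff" * 6, b"\xaa" * 6, vlan_id=30, payload=b"")
print(vlan_of(frame))   # -> 30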

Hardware Virtualization - Hardware virtualization is very similar in concept to OS/Platform virtualization, and to some degree is required for OS virtualization to occur. Hardware virtualization breaks up pieces and locations of physical hardware into independent segments and manages those segments as separate, individual components. Although they fall into different classifications, both symmetric and asymmetric multiprocessing are examples of hardware virtualization. In both instances, the process requesting CPU time isn’t aware which processor it’s going to run on; it just requests CPU time from the OS scheduler and the scheduler takes the responsibility of allocating processor time. As far as the process is concerned, it could be spread across any number of CPUs and any part of RAM, so long as it’s able to run unaffected.

Storage Virtualization - Storage virtualization can be broken up into two general classes: block virtualization and file virtualization. Block virtualization is best summed up by Storage Area Network (SAN) technologies: distributed storage networks that appear to the host as single physical devices, while Network Attached Storage (NAS) presents shared storage at the file level.
Most file virtualization technologies sit in front of storage networks and keep track of which files and directories reside on which storage devices, maintaining global mappings of file locations.
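
A very small sketch of the file-virtualization idea, with invented paths and device names: the client asks for a logical path and the virtualization layer resolves it, through its global mapping, to whichever back-end device actually holds the file.

# Toy global mapping of the kind a file-virtualization layer maintains.
# Logical paths and back-end filer names are invented for illustration.
GLOBAL_MAP = {
    "/projects/report.docx": "filer-01:/vol/docs/report.docx",
    "/projects/data.csv":    "filer-02:/vol/data/data.csv",
    "/archive/2009.tar":     "filer-03:/vol/archive/2009.tar",
}

def resolve(logical_path: str) -> str:
    """Translate the path a client sees into the device that really stores it."""
    try:
        return GLOBAL_MAP[logical_path]
    except KeyError:
        raise FileNotFoundError(f"no back-end location recorded for {logical_path}")

print(resolve("/projects/report.docx"))   # clients never see "filer-01" directly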

Service Virtualization - Service virtualization is the consolidation of all of the above definitions into one catch-all catchphrase. Service virtualization connects all of the components used to deliver an application over the network, and includes the process of making all the pieces of an application work together regardless of where those pieces physically reside.

Wednesday, May 5, 2010

IS/IT customer overview/viewpoint:

One of the main issues IT groups always consider when purchasing new technology is how it will affect their current workload and what happens once the rollout is complete. IT groups have to support technology long after most of the project stakeholders and project managers have moved on to other things. Seasoned IT professionals understand this, and they are therefore notoriously skeptical of anything new because, good or bad, they are left with responsibility for the entire lifecycle of the product. One of the things they learn early is that there are no easy projects; that is life in IT. But bad projects are to be avoided at all costs, and they would never willingly agree to expand a project that was already going badly.
There have been few exceptions to this; usually they will only agree to go forward if they can be convinced of two things. The first is that the company providing the technology is as concerned about success as they are, and will be over the long term; it is crucial that they have access to people who have successfully managed a deployment of the technology they are considering. Second, if they are having an issue, they want to understand what the issue is, what the vendor is doing to ensure it will not happen again, and how the vendor intends to work with them through the current problem. They would rather have a product that is less than optimal than one they will have difficulty supporting.
This can be a larger issue than whether the technology has a great and undeniable benefit, because if they determine that they cannot support the technology then the benefit is irrelevant to them. Often the benefit is irrelevant to them anyway.

Mitigation strategies - IS/IT concern over the difficulty of deploying new technology is especially sharp during times of challenge; this is never clearer than during initial deployment acceptance or when moving from a small pilot to a much broader deployment.

Product familiarization - Obviously the way to overcome discrete product familiarization concerns (form factor, pen use, battery management, etc.) is with a combination of product training and the active involvement of the stakeholders and deployment people within the organization. Companies can usually assist with this through either structured training or ad hoc involvement of the salesperson or the regional technical sales staff. This works marginally well as long as there are no external or product issues that cause additional concern.

Product ecosystem - This can be defined as the “terrain in which you are to be deployed.” When describing this process it is important to understand that we are usually trying either to displace an existing technology or to differentiate ourselves from a competitive product. A number of issues come into play, ranging from minor to overwhelming depending on the severity of the problem, and they are critical when determining how best to proceed with managing the customer experience.

These issues vary but mostly fall into three broad categories:
• Software issues like application compatibility, input modes, usability.
• Hardware issues like obvious product issues, input devices, form factor acceptance.
• Environmental issues like wireless coverage, SSO, temperature, or perhaps ESD.

Traditionally, companies have focused on VARs/integrators and internal customer groups like IS/IT, or perhaps the project champion, to work with sales and achieve successful acceptance. While this is not likely to change and is the most direct path to success, it is important that companies also focus on making the technology easier for IT/IS to accept, and therefore more successful.

What is “Mobility” and how is it defined?
Mobility has commonly been defined as the process of walking and standing while interfacing with an application. For a mobility solution, therefore, a lightweight device with an input mechanism other than a keyboard is desirable and in fact probably necessary. It is commonly assumed that if you have an application you can use with a pen and/or touch, plus connectivity to the network via some sort of wireless connection, then “mobility” is achieved.

In a very narrow interpretation this would indeed be true, but if this is all that is necessary, why is deploying and using the technology such a challenge to the people who are responsible for managing and supporting the devices?
The answer lies in the inherent issues that surround the hardware solution. From an IS/IT perspective, most of these have nothing to do with the core end-user application; they have to do with the IT systems that will need to be implemented to support the device. You have to make this simple and easy to integrate. A lot of the time, core product features actually exacerbate this problem rather than solve it, because they are not mainstream to the devices that IT is already supporting.

How do you overcome these issues?
First you have to fully understand the “terrain in which we are deployed.” This means more than partnering with software vendors and developers who are creating mobility-enhanced applications, and more than assisting with the installation and development of high-availability wireless networks. Companies need to focus on the entire mobility ecosystem. This enables you to guide and assist the people who support the end-user devices and to guarantee successful long-term integration of mobile products into the mainstream IT environment. Then internal staff can be partners and advocates rather than obstacles to overcome.

What do you need to do in order to make this happen?
1. A complete understanding of the entire environment that an IT group has to contend with, including imaging, provisioning, authentication, and physical and network security. This allows you to provide guidance to the support groups and make them successful.
2. Partnerships with specialists who can provide these solutions. This is more than a joint selling arrangement or one-off co-development; it is the procurement of resources that can actually provide guidance and deploy and configure technology in partnership with the end-user support environment.
3. A constant look ahead at where the IT/IS infrastructure is evolving, and the ability to turn this knowledge into solutions that customers can use and benefit from. You cannot rely on local resources to figure this out; they may not be able to, or they may make decisions that are not in either their best interest or ours.

It is important to remember that you are not forcing change; you are facilitating the transition to the inevitable result of end-user mobility. In other words, you are providing thought leadership and behaving as a partner rather than just one more product that the IS/IT group has to deal with. This can be a real differentiator and enable a more meaningful relationship with the groups that will ultimately determine device purchases. I truly believe that in the end it is not the CIO or the end users but an overworked desktop support person who will dictate more technology purchases than not.

Friday, March 5, 2010

Overview
The functional line between smartphones, ultra-portable devices and PCs is being blurred. A couple of years ago smartphones were considered virtually unusable for enterprise applications other than e-mail and messaging, while PCs and laptops were considered unusable as real-time mobile communication devices. This is starting to change as the market sees the emergence of ultra-portable devices that are able to double as true enterprise productivity devices. This year at CES we are seeing a lot of devices that can be used for enterprise productivity but are primarily targeted at mobile communication and entertainment. This provides an opportunity to either build or repurpose these devices and use them for enterprise application presentation.
The opportunity extends to both the device and the application side of the solution set, and it is the premise for the rest of this document: with this new class of enterprise devices, opportunities exist to leverage them for use cases that have traditionally been served by larger presentation devices and operating systems.
Examples of enterprise applications that would benefit from use of an ultra-portable device:
• EMR applications
• MRP applications

Enterprise deployment issues surrounding enterprise application adoption on smaller form factor devices:
• Security – security for both the data and the device.
• Accessibility – access to the IP transport layer and the speed of that access.
• Supportability – would the device be relatively easy for enterprise IT groups to support and maintain?
• Cost – is the feature/function benefit supported by a clear efficiency improvement or revenue model?
Enterprise application usage:
In general, applications written for use in production enterprises are usually meant to run on a Windows-based platform and to be rendered on a large screen, usually 1024x768 minimum. On smaller form factor devices they either don't render correctly or the screens are so small that they are unusable. This has been the case when trying to use enterprise applications on traditional smartphones and PDAs, and it is compounded by the notoriously buggy nature of those devices.

Issues with ultraportable devices:
Primarily, the issues with ultraportable device use in the enterprise center around the following areas:
1. Input and presentation layer issues – these are varied but mainly center around screen size and input devices: touch screens, or keyboards that are virtual or real but tiny.
2. Inability to run enterprise applications – because of the way the applications are rendered, they aren't viewable or usable on smaller screens.
3. Limited battery life and limited peripherals – batteries run out fast, and only a small number of peripherals are usable with these devices.
4. Power and storage space limitations – both are extremely limited, which causes the devices to be heavy, overheat, and/or perform poorly.
5. Operating system limitations – operating systems are either too lightweight or too cumbersome to use on small devices, limiting both performance and capability.
Why Google’s Android may be the best answer:
1. The operating system is built on a Linux kernel and can easily be used in conjunction with, or for development on, many platforms that already exist.
a. Unlike
Overview
Wireless networks are evolving rapidly into very complex combinations of different services that are not inherently designed to work together in a seamless manner. Companies must rapidly develop strategies that will allow them to balance this range of services and still deliver a higher quality of service than ever before. This will require careful planning and the development of a long term strategy for managing both the infrastructure and the radio frequency space within buildings and campus environments. Radio Frequency spectrum management is a huge consideration that most companies do not really have a plan for. Development of an RF management and growth plan is essential as more services and layered network deployments are placed into the same amount of limited frequency space.

Develop a strategy for long term radio frequency asset protection
The fact that radio frequency spectrum is an asset that needs to be protected and managed comes as a surprise even to many seasoned network professionals. For example, 802.11 wireless networks must coexist in a frequency space that is unlicensed and free for anyone to use. Devices coexisted in these frequency spaces long before 802.11 networks were deployed; cordless phones and microwave ovens are classic examples, and many more devices are being deployed into these networks with very little understanding of the ultimate impact on overall network performance or long-term needs. Therefore the planning and implementation of a strategy for the protection and management of this asset should be a paramount consideration for any organization that relies, or is going to rely, on its wireless infrastructure for delivery of mission-critical services. The main issue is how to manage and plan for the current and future use of radio frequencies that are being used by more devices, delivering more services, with higher requirements and resource needs, within very real resource constraints.

New definition of mobility
The classic mindset for wireless networks and wireless devices has been to consider them an extension of the existing wired network. Mobility has been defined as the ability to move from place to place and use wireless networks as a point-of-use technology within these areas of defined wireless utility. This mindset has allowed wired and wireless networks to coexist, but with limited wireless functionality. The new definition of mobility is the ability to take all the services currently delivered over wired networking, including all voice, data and application services, and use them while moving around, disconnected entirely from the wired infrastructure. This definition is creating a real strategic as well as tactical dilemma.

Therefore, the development and implementation of an infrastructure strategy that leverages current and future investment into a cohesive plan protecting investment and asset allocation is imperative for any organization; or at least any organization that wishes to maintain its competitive edge as the industry moves toward the inevitability of true mobility.

Monday, September 21, 2009

Wireless Wars

I don't think I have ever seen networking methodologies as controversial as they are in wireless.
From a common-sense point of view, the use of non-standard versus standards-based 802.11 a/b/g/n makes even less sense, and the only reality that can be gleaned from this is the effectiveness of good marketing and network engineer ego.
If you couple this with the obvious lack of general understanding of what any of the acronyms really mean for device and end-user performance, you can very easily see how the FUD and confusion around this are allowed to persist and propagate. The bottom line is that you should NEVER sacrifice consistency for speed in production, mission-critical networks. The only excuse for doing so is a lack of mission-critical necessity; after all, if it is not important, then use what you want.
But if it is important, then it is important to make sure that someone besides the “wireless guy” makes the call, and that the decision is justified and in keeping with best practices.
I have yet to see anyone agree to use a beta patch on a production server except under dire circumstances, and most often these types of changes are coupled with a high degree of change control and a fair amount of approval.
Yet everyone seems willing to be far more cavalier with wireless network design and authentication architecture, both of which are far more onerous and insidiously difficult to troubleshoot and diagnose, and which cause incredible end-user dissatisfaction with the WHOLE network.
I would guess that this costs at least a billion dollars a year in lost productivity and wasted time, and it is a real talent sink, because “if the wireless ain't working then I ain't working on anything else...”

KD5YDN