Technical and Business Considerations
In traditional on-premise deployment models, enterprise applications and various IT solutions are commonly hosted on centralized servers and storage devices residing in the organization's own data center. End-user devices, such as smartphones and laptops, access the data center through the corporate network, which also supplies the organization's Internet connectivity.
TCP/IP facilitates both Internet access and on-premise data exchange over LANs (Figure 1). Although not commonly referred to as a cloud model, this conﬁguration has been implemented numerous times for medium and large on-premise networks.
Figure 1 – The internetworking architecture of a private cloud. The physical IT resources that constitute the cloud are located and managed within the organization.
Organizations using this deployment model can directly inspect the network traffic flowing to and from the Internet, and usually have complete control over their corporate networks, which they can safeguard using firewalls and monitoring software. These organizations also assume the responsibility of deploying, operating, and maintaining their IT resources and Internet connectivity.
End-user devices that are connected to the network through the Internet can be granted continuous access to centralized servers and applications in the cloud (Figure 2).
A salient cloud feature that applies to end-user functionality is that centralized IT resources can be accessed using the same network protocols regardless of whether they reside inside or outside of a corporate network. Whether IT resources reside on-premise or on the Internet determines how internal versus external end-users access services, even if the end-users themselves are not concerned with the physical location of cloud-based IT resources (Table 1).
Figure 2 – The internetworking architecture of an Internet-based cloud deployment model. The Internet is the connecting agent between non-proximate cloud consumers, roaming end-users, and the cloud provider’s own network.
| On-Premise IT Resources | Cloud-Based IT Resources |
| --- | --- |
| Internal end-user devices access corporate IT services through the corporate network | Internal end-user devices access corporate IT services through an Internet connection |
| Internal users access corporate IT services through the corporate Internet connection while roaming in external networks | Internal users access corporate IT services while roaming in external networks through the cloud provider's Internet connection |
| External users access corporate IT services through the corporate Internet connection | External users access corporate IT services through the cloud provider's Internet connection |
Table 1 – A comparison of on-premise and cloud-based internetworking.
Cloud providers can easily configure cloud-based IT resources to be accessible for both external and internal users through an Internet connection (as previously shown in Figure 2). This internetworking architecture benefits internal users that require ubiquitous access to corporate IT solutions, as well as cloud consumers that need to provide Internet-based services to external users. Major cloud providers offer Internet connectivity superior to that of individual organizations, although this connectivity is metered, with network usage charges forming part of the provider's pricing model.
Network Bandwidth and Latency Issues
In addition to being affected by the bandwidth of the data link that connects networks to ISPs, end-to-end bandwidth is determined by the transmission capacity of the shared data links that connect intermediary nodes. ISPs need to use broadband network technology to implement the core network required to guarantee end-to-end connectivity. This type of bandwidth is constantly increasing, as Web acceleration technologies, such as dynamic caching, compression, and pre-fetching, continue to improve end-user connectivity.
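The point above can be sketched in a few lines: the effective end-to-end bandwidth of a path is limited by its slowest (bottleneck) link, whether that is the ISP access link or a shared core link. The link capacities below are hypothetical values chosen for illustration, not measurements of any real network.

```python
# Hypothetical link capacities along one path, in Mbit/s:
# consumer LAN -> ISP access link -> core -> provider access link -> provider LAN
path_link_capacities = [1000, 400, 10000, 100, 1000]

def end_to_end_bandwidth(capacities):
    """The effective bandwidth of a path is the capacity of its slowest link."""
    return min(capacities)

print(end_to_end_bandwidth(path_link_capacities))  # -> 100
```

Here the 100 Mbit/s provider access link caps the whole path, even though every other link is faster, which is why both the consumer-side and provider-side data links matter.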
Also referred to as time delay, latency is the amount of time it takes a packet to travel from one data node to another. Latency increases with every intermediary node on the data packet’s path. Transmission queues in the network infrastructure can result in heavy load conditions that also increase network latency. Networks are dependent on trafﬁc conditions in shared nodes, making Internet latency highly variable and often unpredictable.
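The accumulation of latency described above can be modeled as a simple sum over hops, where each intermediary node contributes propagation/processing delay plus a queuing delay that grows under load. The per-hop millisecond values below are illustrative assumptions, not measurements.

```python
# Each hop contributes a fixed propagation/processing delay plus a
# variable queuing delay; all values are illustrative, in milliseconds.
hops = [
    {"propagation_ms": 5,  "queuing_ms": 0.2},
    {"propagation_ms": 12, "queuing_ms": 3.5},  # congested shared node
    {"propagation_ms": 8,  "queuing_ms": 0.4},
]

def end_to_end_latency(hops):
    """End-to-end latency is the sum of per-hop delays along the path."""
    return sum(h["propagation_ms"] + h["queuing_ms"] for h in hops)

print(round(end_to_end_latency(hops), 1))  # -> 29.1
```

Because the queuing terms depend on traffic at shared nodes, the total varies from packet to packet, which is exactly why Internet latency is highly variable and often unpredictable.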
Packet networks with “best effort” quality-of-service (QoS) typically transmit packets on a ﬁrst-come/ﬁrst-serve basis. Data ﬂows that use congested network paths suffer service-level degradation in the form of bandwidth reduction, latency increase, or packet loss when trafﬁc is not prioritized.
The nature of packet switching allows data packets to choose routes dynamically as they travel through the Internet's network infrastructure. End-to-end QoS can be impacted as a result of this dynamic selection, since the travel speed of data packets is susceptible to conditions like network congestion and is therefore non-uniform.
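The first-come/first-serve behavior of a best-effort node can be sketched as a tiny FIFO queue simulation. The service time and arrival patterns below are hypothetical; the point is only that when a burst of packets shares one outgoing link, delay grows with queue position.

```python
SERVICE_TIME = 1.0  # ms to transmit one packet on the outgoing link (assumed)

def fifo_delays(arrival_times):
    """Return each packet's total delay (waiting + transmission) under FIFO."""
    delays = []
    link_free_at = 0.0
    for t in arrival_times:
        start = max(t, link_free_at)     # wait if the link is still busy
        link_free_at = start + SERVICE_TIME
        delays.append(link_free_at - t)  # total time spent at this node
    return delays

# Light load: packets spaced out, each sees only the transmission time.
print(fifo_delays([0, 5, 10]))    # -> [1.0, 1.0, 1.0]
# Congestion: a burst arrives at once, and delay grows with queue position.
print(fifo_delays([0, 0, 0, 0]))  # -> [1.0, 2.0, 3.0, 4.0]
```

With no prioritization, every flow sharing the congested node suffers the same degradation, which is the service-level risk of best-effort QoS noted above.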
IT solutions need to be assessed against the business requirements that are affected by network bandwidth and latency, both of which are inherent to cloud interconnection. Bandwidth is critical for applications that transfer substantial amounts of data to and from the cloud, while latency is critical for applications that require swift response times.
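A rough back-of-the-envelope model makes this assessment concrete: transfer time is approximately the round-trip latency plus the payload size divided by the available bandwidth. The link figures below are assumptions for illustration only.

```python
def transfer_time_s(payload_mb, bandwidth_mbps, latency_ms):
    """Approximate transfer time: latency plus size / bandwidth.

    payload_mb is in megabytes; bandwidth_mbps in Mbit/s (hence the *8).
    """
    return latency_ms / 1000 + (payload_mb * 8) / bandwidth_mbps

# Bulk upload: 10 GB over an assumed 100 Mbit/s link is bandwidth-bound;
# the 40 ms latency is negligible next to the transfer itself.
print(round(transfer_time_s(10_000, 100, 40), 1))  # -> 800.0

# Interactive request: a 10 kB response is latency-bound; the 40 ms
# round trip dominates the time on the wire.
print(round(transfer_time_s(0.01, 100, 40), 4))    # -> 0.0408
```

The same formula shows why adding bandwidth helps the first workload but does almost nothing for the second, where only a lower-latency path (or fewer intermediary hops) improves response time.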
Cloud Carrier and Cloud Provider Selection
The service levels of Internet connections between cloud consumers and cloud providers are determined by their ISPs, which are usually different on each side, so end-to-end paths typically traverse multiple ISP networks. QoS management across multiple ISPs is difficult to achieve in practice, requiring the cloud carriers on both sides to collaborate in order to ensure that end-to-end service levels are sufficient for business requirements.
Cloud consumers and cloud providers may need to use multiple cloud carriers in order to achieve the necessary level of connectivity and reliability for their cloud applications, resulting in additional costs. Cloud adoption can therefore be easier for applications with more relaxed latency and bandwidth requirements.