The concept of SDN (Software Defined Networking) appeared as early as 2009, but it only recently began to attract wide attention, mainly because Google announced that the networks in its internal data centers are now controlled entirely with OpenFlow, which instantly pushed OpenFlow from a largely academic exercise into the commercial field. The second piece of exciting news is that VMware acquired Nicira, a network virtualization company, for 1.26 billion dollars.
SDN is, at its core, an idea: a programmable network. The control plane that used to be locked inside closed network devices is pulled out of the "box" entirely and handed to a centralized controller. It is completely open, and you can define any mechanism or protocol you want on top of it. For example, if you don't like a protocol built into the switch/router, you can modify or even remove it through programming and replace it with a completely different control protocol. It is precisely this openness that makes the network's room for development effectively unlimited. In other words, the only real limit is what you can imagine.
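To make this concrete, here is a minimal sketch of what "programming the network" looks like in practice, assuming the open-source Ryu controller framework and OpenFlow 1.3 (neither is prescribed by anything above; they are simply one common toolchain). The app installs a table-miss rule so that any packet the switch cannot match is sent up to the centralized controller, where your own software decides what to do with it.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class SimpleForwarder(app_manager.RyuApp):
    """Toy controller app: forwarding logic lives here, not in the switch."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Table-miss flow: anything the switch cannot match is punted
        # to the controller, where arbitrary user-defined logic runs.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```

The point is not this particular rule but the fact that the forwarding behavior now lives in ordinary software that you can rewrite at will.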
Then why is SDN associated with NV (network virtualization)? In fact, there is no causal relationship between the two. SDN was not designed to realize network virtualization; rather, it is precisely SDN's advanced architecture that makes the task of network virtualization achievable. Many people (myself included) even thought SDN was NV when they first encountered it, but SDN's vision is much broader. In mathematical terms, NV is a subset of SDN: SDN contains NV, but is not limited to it.
Let's look at NV itself. Why is NV so popular? Ultimately, it is because of the rise of cloud computing. Server and storage virtualization provide the infrastructure support for cloud computing, and mature products and solutions already exist, but a problem remains: even so, virtual machine migration is still not flexible enough. For example, VMware vMotion can migrate virtual machines online, and EMC VPLEX can support active-active sites, but the virtual machine's network state (address, policy, security, VLAN, ACL, etc.) is still rigidly coupled to the physical devices. Even if a virtual machine is migrated successfully from one subnet to another, you still need to change its IP address, and that process inevitably involves downtime. In addition, many policies are based on addresses; when the address changes, the policies change too, so the work remains manual, complicated, and error-prone. Therefore, to achieve truly complete virtual machine migration, logical objects (such as IP addresses) must be decoupled from physical network devices so that no existing configuration has to change. This is just one example. In a word, the aim is to let a VM migrate anywhere in the data center without loss and, especially in a multi-tenant environment such as the cloud, to present each tenant with a complete network view, enabling a genuinely agile business model and attracting more customers to cloud computing.
SDN is not the only way to virtualize the network. In fact, many companies are using overlay approaches (MAC-in-MAC, IP-in-IP, and the like), such as Microsoft's NVGRE, Cisco/VMware's VXLAN, Cisco's OTV, Nicira's STT, and so on. Overlay networking seems to have become the standard practice for implementing NV, while SDN-based NV implementations currently live mostly in academia and research. New technology always attracts a crowd of competitors, each wanting a piece of the action and hoping to eventually become the standard. The drama has only just begun, and I believe it will get more and more exciting.
Personally, I think this is a very interesting topic. I hope everyone can communicate and learn from each other.
The goal of NV is to present a complete network to each tenant in a cloud environment. Tenants may want to use whatever IP address ranges and whatever topology they like, and of course they don't want to change their original IP addresses when migrating to the public cloud, because that means downtime. Customers therefore want a secure, fully isolated network environment with no risk of conflict with other tenants. Since vMotion and similar features let virtual machines drift freely around the cloud while online, can the network drift along with them? Here is a brief introduction to Microsoft's Hyper-V Network Virtualization, not because the technology is especially advanced, but because its implementation details are relatively open; other companies' specific practices are comparatively closed, which makes them hard to use as examples.
Microsoft's idea is actually very simple: the virtual machine's original layer-2 frame is re-encapsulated into an IP packet via NVGRE for transmission, so that switches can determine the packet's final destination by inspecting the key NVGRE fields. This is really an overlay practice that separates the virtual network from the physical network. Imagine that Company A and Company B have both moved to the public cloud, and some of their virtual machines happen to be attached to the same physical switch. The problem is that their virtual machines originally used the same private IP range, and without VLANs this would lead to IP conflicts. But now this is no longer a problem: traffic between virtual machines is encapsulated by NVGRE, and the new IP packets travel over the physical network in the physical address space, which the cloud service provider controls exclusively, so there are no IP conflicts.
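As a rough illustration of that encapsulation, the sketch below packs the 8-byte GRE header that NVGRE uses: the Key-present bit is set, protocol type 0x6558 marks the payload as a transparent Ethernet frame, and the 32-bit key carries a 24-bit Virtual Subnet ID (VSID) plus an 8-bit FlowID. The VSID value and the placeholder frame are invented for the example.

```python
import struct


def nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Build the 8-byte GRE header used by NVGRE.

    Flags: only the Key-present bit (K) is set; the 32-bit key carries
    a 24-bit Virtual Subnet ID (VSID) plus an 8-bit FlowID.  Protocol
    type 0x6558 (Transparent Ethernet Bridging) says the payload is the
    tenant's original layer-2 frame.
    """
    flags_version = 0x2000                       # K=1, C=0, S=0, version 0
    proto_type = 0x6558                          # inner payload: Ethernet frame
    key = ((vsid & 0xFFFFFF) << 8) | (flow_id & 0xFF)
    return struct.pack("!HHI", flags_version, proto_type, key)


# Encapsulation order: outer IP header (provider address space) + NVGRE
# header + the VM's original Ethernet frame.  Two tenants can reuse the
# same private IP range as long as their VSIDs differ.
inner_frame = b"..."                             # placeholder for the VM's L2 frame
packet = nvgre_header(vsid=5001) + inner_frame
```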
To sum up, network virtualization here can be regarded as IP address virtualization: it completely isolates the virtual network's IP addresses from the physical network, avoids IP conflicts, and allows virtual machines to be migrated online across subnets. Microsoft's requirement is that virtual machines can move freely within the data center while customers notice nothing, which brings enormous flexibility.
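One way to picture this isolation: Hyper-V Network Virtualization distinguishes the Customer Address (CA, the IP the tenant VM sees) from the Provider Address (PA, the IP actually used on the physical network). The toy lookup table below is only an assumed model of such a policy, not Microsoft's actual data structures; the addresses are invented.

```python
from typing import Dict, Tuple

# key: (virtual subnet ID, customer address)  ->  provider address of the
# host currently running that VM.  Because the key includes the VSID,
# two tenants can reuse the same customer address without conflict.
nv_policy: Dict[Tuple[int, str], str] = {
    (5001, "10.0.0.5"): "192.168.2.11",   # tenant A's VM
    (6001, "10.0.0.5"): "192.168.2.42",   # tenant B reuses the same CA
}


def provider_address(vsid: int, customer_ip: str) -> str:
    """Resolve where to send the NVGRE-encapsulated packet."""
    return nv_policy[(vsid, customer_ip)]


# Live migration: only the PA entry changes; the VM keeps its customer
# address, so nothing inside the tenant's network has to be reconfigured.
nv_policy[(5001, "10.0.0.5")] = "192.168.3.17"
```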
Software Defined Networking (SDN) is an approach to computer networking that evolved from work done at the University of California, Berkeley and Stanford University around 2008. [1] SDN allows network administrators to manage network services by abstracting lower-level functionality. This is achieved by decoupling the system that decides where traffic is sent (the control plane) from the underlying system that forwards traffic to the selected destination (the data plane). Inventors and vendors of these systems claim that this simplifies networking. [2]
SDN requires some mechanism for the control plane to communicate with the data plane. One such mechanism, OpenFlow, is often mistaken as being equivalent to SDN, but other mechanisms can also fit the concept. The Open Networking Foundation was founded to promote the development of SDN and OpenFlow and to market the use of the term.
One application of SDN is Infrastructure as a Service (IaaS).
This extension means that an SDN virtual network, combined with virtual compute (VMs) and virtual storage, can emulate elastic resource allocation, as if each such enterprise application were written like a Google or Facebook application. In most of these applications, resource allocation is statically mapped in inter-process communication (IPC). However, if that mapping could be expanded or shrunk to large (many-core) or small virtual machines, the behavior would closely resemble one of those purpose-built large-scale Internet applications.
Another use in consolidated data centers is pooling the spare capacity stranded in static rack partitions. Pooling this spare capacity significantly reduces the computing resources needed, and pooling active resources raises average utilization.
Uses of SDN's distributed and global edge control also include the ability to balance load across the many links from the racks to the data center's switching fabric. Without SDN, this task is done with traditional link-state updates, which update every location whenever any location changes. Distributed global SDN measurements may raise the ceiling on physical cluster size. Other data center uses that have been listed include distributed application load balancing, distributed firewalls, and similar adaptations of the original network functions that arise from the dynamic, any-location or any-rack allocation of computing resources.
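A minimal sketch of the edge load-balancing idea, with entirely hypothetical names and no real controller API: a controller that sees the load on every rack-to-spine uplink can simply place each new flow on the least-loaded one, instead of waiting for link-state floods to converge.

```python
from typing import Dict, List


class EdgeLoadBalancer:
    """Toy global view of rack-to-spine uplinks kept by a controller."""

    def __init__(self, uplinks: List[str]):
        # bytes currently assigned to each uplink (the global measurement)
        self.load: Dict[str, int] = {u: 0 for u in uplinks}

    def place_flow(self, flow_id: str, expected_bytes: int) -> str:
        """Assign a new flow to the least-loaded uplink."""
        uplink = min(self.load, key=self.load.get)
        self.load[uplink] += expected_bytes
        return uplink


lb = EdgeLoadBalancer(["uplink-1", "uplink-2", "uplink-3"])
print(lb.place_flow("tenantA-web-42", expected_bytes=10_000_000))
```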
Other uses of SDN in enterprise or carrier managed network services (MNS) address the traditional, geographically distributed campus network. These environments have always been challenged by the complexity of moves, adds, and changes, of mergers and reorganizations, and of acquisitions and the movement of users. Based on SDN principles, IT departments hope that these identity and policy management challenges can be addressed through global definitions decoupled from the physical interfaces of the network infrastructure, while the underlying infrastructure of potentially thousands of switches and routers remains intact.
It has been observed that this "overlay" approach is likely to suffer from inefficiency and poor performance because it ignores the characteristics of the underlying infrastructure. Operators have therefore identified these gaps in the overlay approach and are demanding SDN solutions that take traffic, topology, and equipment into account to fill them. [7]
SDN deployment models
Symmetric and asymmetric
In the asymmetric model, SDN's global information is centralized as much as possible, while edge-driven behavior is distributed as much as possible. The reasoning behind this approach is clear: centralization makes global consolidation easier, while distribution reduces the pressure of SDN traffic aggregation and encapsulation. However, this model raises questions about the exact relationship between these very different types of SDN elements in terms of consistency, simplicity of scale-out, and high availability across multiple locations, questions that do not arise with the traditional autonomous-system (AS)-based network model. In the symmetric, distributed SDN model, the effort goes into increasing both the global information distribution capability and the SDN aggregation performance capability, so that SDN components are essentially a single type of element. A group of such elements can form an SDN overlay as long as there is network reachability among any subset of them.
Flood-based and flood-less
In the flood-based model, much of the global information sharing is achieved using well-known broadcast and multicast mechanisms. This helps make the SDN model more symmetric and leverages existing transparent-bridging principles in dynamic encapsulation to achieve global awareness and identity learning. One drawback of this approach is that as locations are added, the load on each location also grows, which reduces scalability. In the flood-less model, all forwarding is based on global exact matching, typically realized with distributed hashing and distributed caching of the SDN lookup tables.
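The flood-less variant can be pictured with a toy distributed hash table: edges publish each locally learned MAC binding to a directory shard chosen by hashing the key, and forwarding then does a global exact-match lookup instead of flooding unknown destinations. The node names and sharding scheme below are hypothetical.

```python
import hashlib
from typing import Dict, List, Optional


class FloodFreeLookup:
    """Toy sharded lookup table standing in for a distributed hash/cache."""

    def __init__(self, directory_nodes: List[str]):
        self.directory_nodes = directory_nodes
        # one shard of the lookup table per directory node
        self.shards: Dict[str, Dict[str, str]] = {n: {} for n in directory_nodes}

    def _owner(self, mac: str) -> str:
        # hash the key to pick the directory node that owns this entry
        digest = int(hashlib.sha1(mac.encode()).hexdigest(), 16)
        return self.directory_nodes[digest % len(self.directory_nodes)]

    def publish(self, mac: str, location: str) -> None:
        """An edge learns a local endpoint and publishes it (no flooding)."""
        self.shards[self._owner(mac)][mac] = location

    def resolve(self, mac: str) -> Optional[str]:
        """Global exact-match lookup against the owning shard."""
        return self.shards[self._owner(mac)].get(mac)


dht = FloodFreeLookup(["dir-1", "dir-2", "dir-3"])
dht.publish("00:1a:2b:3c:4d:5e", "tor-switch-7")
print(dht.resolve("00:1a:2b:3c:4d:5e"))   # -> tor-switch-7
```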
Host-based and Network-centric
In the host-based model, it is assumed that SDN is used in a data center with a large amount of virtual machine mobility, and that the SDN encapsulation is already performed at the hypervisor on behalf of the local virtual machines. This design reduces SDN edge traffic pressure and uses "free" processing drawn from each host's spare core capacity. In the network-centric design, the boundary between the network edge and the endpoints is clearer: the SDN edge is associated with the top-of-rack access devices and stays away from the host endpoints. This is a more traditional networking approach that does not rely on the endpoints to perform any routing function.
Some boundaries between these design models may not be completely clear-cut. For example, in a data center with a compute fabric, a "big" host with many CPU cards also performs some topological access functions and can centralize the SDN edge functions on behalf of all the CPU cards in the chassis. This would be both a host-based and a network-centric design. There can also be dependencies between these design variants: host-based implementations usually require an asymmetric, centralized lookup or orchestration service to help organize large-scale distribution, while symmetric and flood-less implementation models usually require SDN aggregation in the network so that lookup and distribution can be handled across a reasonable number of edge points. Such aggregation relies on local OpenFlow interfaces to sustain the traffic encapsulation pressure. [5] [6]