FHO5000 Series OTDR FAQ
1. FAQ FOR ALL OTDRS
1) WHAT IS OTDR SHORT FOR, AND WHAT IS ITS MAIN FUNCTION?
2) WHAT ARE THE BASIC FUNCTIONS OF AN OTDR?
3) WHAT BASIC FEATURES SHOULD AN OTDR HAVE?
4) HOW TO SELECT AN OTDR?
5) HOW MANY OTDR MANUFACTURERS ARE IN THE MARKET?
6) WHAT IS THE DYNAMIC RANGE?
7) WHAT ARE THE EVENT DEAD ZONE AND ATTENUATION DEAD ZONE?
8) WHAT IS THE PULSE WIDTH, AND HOW TO CHOOSE IT BASED ON THE FIBER LENGTH?
9) WHAT IS OTDR RESOLUTION?
10) CAN I USE AN SM OTDR TO TEST MM FIBER?
11) WHAT IS AN OTDR LAUNCH CABLE, AND WHY DO I NEED IT?
12) WHAT IS A TAIL CORD, AND WHY DO I NEED IT?
13) WHAT IS AN “ECHO” OR “GHOST” EVENT ON AN OTDR TRACE?
2. FAQ FOR THE GRANDWAY FHO5000 OTDR
1) WHAT IS THE OPERATING SYSTEM OF THE OTDR?
2) HOW TO CALIBRATE THE OTDR?
3) WHAT MODULES CAN BE ADDED TO THE FHO5000 OTDR?
4) HOW DO I KNOW WHICH RANGE TO SELECT ON MY OTDR?
5) WHAT PULSE WIDTH SHOULD BE USED WHEN TESTING FIBER?
6) WHAT ADAPTERS ARE INCLUDED WITH THE FHO5000 OTDR?
7) HOW TO TEST A BARE FIBER?
1. FAQ for all OTDRs
1) What is OTDR short for, and what is its main function?
OTDR is short for Optical Time Domain Reflectometer. An OTDR acts like radar: it injects a series of laser pulses into the fiber through its optical interface and detects the light returning from backscatter along the fiber and from reflections at joints (splices, connectors, etc.). Based on the returned signal, the OTDR generates a trace and displays it on the screen. From the trace, the instrument can calculate the length, attenuation, and joint losses of the optical fiber.
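To make the radar analogy concrete, the distance to an event follows from the round-trip time of the pulse and the group index of refraction of the fiber. The Python sketch below shows this basic relation; the group index of 1.468 is only a typical assumed value, not an FHO5000 setting.

```python
# Minimal sketch: how an OTDR converts a pulse round-trip time into distance.
# The group index value (1.468) is only a typical figure; a real instrument
# uses the index of refraction configured for the fiber under test.

SPEED_OF_LIGHT_M_PER_S = 299_792_458  # speed of light in vacuum

def distance_from_round_trip_time(round_trip_time_s: float,
                                  group_index: float = 1.468) -> float:
    """Distance to a reflection, given the round-trip time of the pulse.

    The factor of 2 accounts for the pulse travelling to the event and back.
    """
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / (2.0 * group_index)

if __name__ == "__main__":
    # A reflection seen 100 microseconds after launch sits roughly 10.2 km away.
    print(f"{distance_from_round_trip_time(100e-6) / 1000:.2f} km")
```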
2) What are the basic functions of an OTDR?
Measure the length of an optical fiber
Measure the optical fiber distance between two sites
Locate fault points and breaks in the optical fiber
Display the trace of the optical fiber
Measure the attenuation of the optical fiber cable
Measure the reflectance of reflective events in the fiber cable
3) What basic features should an OTDR have?
Distance, loss, and reflectance figures for each event
Display of the length and attenuation of the whole fiber cable
Large storage capacity for traces
Easy operation with a GUI interface
RS232/USB/network ports to upload data to a PC
PC analysis software for analyzing the stored data
Report generation for tested traces
Backlight for dark and night operation
Built-in VFL (Visual Fault Locator)
4) How to select an OTDR?
Before you buy an OTDR, evaluate your needs and the skill of the intended users first. Ask yourself several questions:
1) Are you installing or maintaining fiber?
2) If maintenance, is finding the location of a fault the main task?
3) If installation, do you need to measure more than loss and length, e.g. connector quality, dispersion, or optical return loss?
Once you have your answers, please see the link for more details on choosing the right OTDR: How to choose the right OTDR?
5) How many OTDR manufacturers are in the market?
There are many manufacturers, such as EXFO, JDSU, Fluke, Grandway F2H, Yokogawa, Anritsu, etc.
6) What is the dynamic range?
The dynamic range determines the total optical loss that the OTDR can analyze and, therefore, the total length of fiber link it can measure. The higher the dynamic range, the greater the distance the OTDR can analyze. The dynamic range specification must be considered carefully, for the two reasons below.
1. OTDR manufacturers specify the dynamic range in different ways (playing with parameters such as pulse width, signal-to-noise ratio, averaging time, etc.). It is therefore important to understand these conditions thoroughly and avoid making unsuitable comparisons.
2. An insufficient dynamic range results in an inability to measure the full link length and affects, in many cases, the accuracy of the link loss, connector losses, and far-end attenuation measurements. A good empirical rule is to select an OTDR whose dynamic range is 5 to 8 dB higher than the maximum loss you expect to encounter (see the sketch below).
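As a rough illustration of that empirical rule, the sketch below adds up an assumed loss budget for a link and the 5 to 8 dB margin mentioned above. All per-kilometre, per-connector, and per-splice figures are assumptions chosen for the example, not FHO5000 or cable specifications.

```python
# Rough loss-budget estimate used to size the required OTDR dynamic range.
# The per-kilometre and per-event losses below are illustrative assumptions,
# not measured values; substitute the figures for the actual link.

def required_dynamic_range_db(length_km: float,
                              n_connectors: int,
                              n_splices: int,
                              atten_db_per_km: float = 0.35,   # assumed SM figure at 1310 nm
                              connector_loss_db: float = 0.5,  # assumed per connector
                              splice_loss_db: float = 0.1,     # assumed per splice
                              margin_db: float = 8.0) -> float:
    """Total expected link loss plus the 5-8 dB margin suggested in the FAQ."""
    link_loss = (length_km * atten_db_per_km
                 + n_connectors * connector_loss_db
                 + n_splices * splice_loss_db)
    return link_loss + margin_db

if __name__ == "__main__":
    # 60 km link, 4 connectors, 15 splices -> pick an OTDR with at least this dynamic range.
    print(f"{required_dynamic_range_db(60, 4, 15):.1f} dB")
```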
7) What are the event dead zone and attenuation dead zone?
Event dead zone: the minimum distance required for consecutive reflective events to be resolved, i.e. differentiated from each other. If a reflective event falls within the dead zone of the event that precedes it, it cannot be detected or measured correctly. Industry standard values range from 1 to 5 m for this specification.
Attenuation dead zone: the minimum distance required after a reflective event for the OTDR to measure the loss of a following reflective or non-reflective event. To measure and characterize short links or to locate faults in cables and patch cords, it is best to have the attenuation dead zone as small as possible. Industry standard values range from 3 to 10 m for this specification.
8) What is the pulse width, and how to choose it based on the fiber length?
The key is to always use the shortest pulse width that satisfies the trace quality and allows the user to see the end of the fiber. Short pulse widths are used for short fibers; long pulse widths are used on long fibers. If the trace shows excessive noise that cannot be removed by additional averaging, select the next higher pulse width. An illustrative selection guide is sketched below.
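The sketch below expresses that rule of thumb as a simple lookup. The length thresholds and pulse widths are invented for illustration and are not taken from any manufacturer's table; the final choice still follows the procedure above (step up only if the trace is too noisy).

```python
# Illustrative mapping from fiber length to a starting pulse width.
# The thresholds and pulse widths are assumptions for demonstration only;
# always step up to the next pulse width if the trace is too noisy to
# show the fiber end, exactly as described in the FAQ.

PULSE_WIDTH_GUIDE_NS = [
    (5,    30),      # up to ~5 km  -> try 30 ns
    (20,   100),     # up to ~20 km -> try 100 ns
    (60,   300),
    (120,  1_000),
    (260,  10_000),
]

def suggest_pulse_width_ns(fiber_length_km: float) -> int:
    """Return the shortest pulse width in the (assumed) guide that covers the length."""
    for max_km, pulse_ns in PULSE_WIDTH_GUIDE_NS:
        if fiber_length_km <= max_km:
            return pulse_ns
    return PULSE_WIDTH_GUIDE_NS[-1][1]

print(suggest_pulse_width_ns(45))   # -> 300 (ns), then widen if the trace is noisy
```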
9) What is OTDR resolution?
The sampling resolution is defined as the minimum distance between two consecutive sampling points acquired by the instrument. This parameter is important because it defines the ultimate distance accuracy and fault-finding ability of the OTDR. It depends on the selected pulse width and distance range.
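Assuming the instrument spreads a fixed number of acquisition points across the selected distance range, the sampling resolution can be estimated as range divided by points. The 128,000-point figure below is an assumption for illustration, not an FHO5000 specification.

```python
# Sampling resolution estimated as distance range divided by the number of
# acquisition points. The 128,000-point figure is assumed for illustration.

def sampling_resolution_m(range_km: float, sample_points: int = 128_000) -> float:
    """Approximate distance between consecutive sampling points, in metres."""
    return range_km * 1000.0 / sample_points

print(f"{sampling_resolution_m(80):.2f} m per sample point on an 80 km range")
```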
10) Can I use an SM OTDR to test MM fiber?
An SM (single-mode) OTDR can be used to test MM (multimode) fiber, but the results will not be accurate: the distance, cable loss, connector loss, and return loss may all be wrong. Because the laser is launched from a small-core fiber into a large-core fiber, the light is not coupled and collected properly, so the test results are not precise.
11) What is an OTDR launch cable, and why do I need it?
An OTDR launch cable allows the OTDR to measure the loss and reflectance of the first connection in the link. However, it does not eliminate the dead zone after the first connection in the fiber link. We generally recommend a 1 km launch cable for fiber network testing.
12) What is a tail cord, and why do I need it?
A tail cord is a long patch cord connected to the end of the fiber link under test. It creates OTDR backscatter after the final connection in the link, so that the loss and reflectance of the last connection in the network can be measured.
13) What is an “echo” or “ghost” event on an OTDR trace?
An echo occurs when the OTDR receives unwanted multiple reflections. Large reflective events are more likely to cause multiple reflections due to large amounts of energy reflected back to the OTDR. Portions of the energy reflected multiple times result in echoes. These waveform artifacts look like real events; however they seldom have loss associated with them.
2. FAQ for the Grandway FHO5000 OTDR
1) What is the operating system of the OTDR?
The operating system of the OTDR is Windows CE.
2) How to calibrate the OTDR?
3) What modules can be added to the FHO5000 OTDR?
The FHO5000 OTDR supports an optional power meter module, light source module, and microscope module, and can also be fitted with a touch screen and waterproofing features.
4) How do I know which range to select on my OTDR?
The FHO5000 OTDR supports an auto test mode, so it can automatically scan the network and set the range. When setting the range manually, the selected range must be at least the total length of the launch cable plus the cable under test plus the tail cord. For the best result and trace display, the selected range should be about 150% of that total length (total length = launch cable length + cable under test length + tail cord length), as in the sketch below.
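A minimal sketch of that range rule, assuming the 150% headroom suggested above:

```python
# The range-selection rule from the answer above: the selected range must at
# least cover launch cable + fiber under test + tail cord, and roughly 150 %
# of that total gives a comfortable trace display.

def suggested_range_km(launch_km: float, fiber_km: float, tail_km: float,
                       headroom: float = 1.5) -> float:
    """Suggested OTDR range: total link length times the headroom factor."""
    total = launch_km + fiber_km + tail_km
    return total * headroom

# 1 km launch cable, 25 km link, 0.5 km tail cord -> set the range to ~40 km.
print(f"{suggested_range_km(1.0, 25.0, 0.5):.1f} km")
```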
5) What pulse width should be used when testing fiber?
The key is to always use the shortest pulse width that satisfies the trace quality and allows the user to see the end of the fiber. Short pulse widths are used for short fibers; long pulse widths are used on long fibers. If the trace shows excessive noise that cannot be removed by additional averaging, select the next higher pulse width.
For the FHO5000 OTDR, please refer to the table below:
6) What adapters are included with the FHO5000 OTDR?
The standard fiber adapter for the FHO5000 OTDR is FC, with SC and ST available as options. Users who want to test SC or LC links with the standard FC adapter are advised to use an FC/UPC-SC/UPC or FC/UPC-LC/UPC launch cable.
A second solution is to use a hybrid bulkhead adapter on the fiber-under-test side, for example FC to SC.
7) How to test a bare fiber?
It is recommended to use a pigtail and a mechanical splice to test bare fiber. Connect a pigtail of the correct fiber type and connector to the OTDR or to the far end of a launch cable. Cleave the opposite end of the pigtail and insert it into a mechanical splice. Cleave the end of the fiber to be tested and insert it into the opposite side of the mechanical splice. Using the "Real Time" function available on most OTDRs, you can adjust the position of the fibers in the mechanical splice to get the best throughput. The fiber is now ready to be scanned.
Wednesday, November 6, 2013
Choosing a Visual Fault Locator
A visual fault locator can be regarded as covering part of an OTDR's job, but it is much cheaper.
A fiber visual fault locator is a device that can locate breakpoints, bends, or cracks in the fiber glass. It can also locate faults within the OTDR dead zone and identify a fiber from one end to the other. Designed with an FC/SC/ST universal adapter, this fiber optic visual fault locator can be used without any additional adapters. It can locate faults up to 10 km along a fiber cable, and it is compact, lightweight, and uses a red laser output.
Fiber visual fault locators include pen-type, handheld, and portable models. FiberCasa also supplies a new kind of fiber laser tester that can locate faults up to 30 km along a fiber cable.
When choosing a visual fault locator, here are several tips.
(1) Focus on the launch power and launch range.
The launch power is the optical power the unit outputs, and the launch range is the distance over which it can detect a fault.
The VSL-8 series visual fault locator is available with four launch ranges: 5 km, 12 km, 14 km, and 15 km, with launch powers of 1 mW, 10 mW, 15 mW, and 30 mW.
(2) The size and weight.
A visual fault locator is a common and easy-to-use fiber tester that can be used in many areas and positions, so a small size and easy portability are very important.
The ergonomic design and small size give a better feel in the hand, and with its composite plastic shell it is very light and easy to carry.
(3) The stability.
Stability is the most important attribute of a visual fault locator. A visual fault locator without a stable light source cannot work well and is best left alone.
A visible laser source works on the same principle as a visual fault locator. The VSL-8 series visual fault locator has an ergonomic design. With its advanced PCB circuit design, the light of the VFL-8 will not dim even on a low battery, and an automatic power control circuit keeps the output laser power stable.
Tuesday, November 5, 2013
Several Fiber Optic Devices
1. Fiber Coupler
A fiber optic coupler, also called a fiber optic adapter, is used to connect and couple optical fiber connectors. The model is selected according to the connector type of the fiber being joined. By joint structure it can be divided into FC, SC, ST, LC, MTRJ, MPO, MU, SMA, DDI, DIN4, D4, and E2000 forms, with good sintering technology to ensure excellent strength and stability (200-600 gf insertion force).
Applications of Fiber Optic Couplers
Fiber communication network
Broadband access network
Optical CATV
Optical instruments
LAN
2. Fiber Termination Box
A cable termination box, also known as an optical fiber termination box or fiber termination box, is a connection device between multi-core cables and termination equipment. It is mainly used to fix the cable end, store and protect the spare fiber, and house the splices between the fiber optic cable and the fiber pigtails.
3. Fusion Splicer
A fusion splicer joins two optical fiber cables by splicing the fibers inside the cables. Because the fiber is essentially glass, the two ends must be fused together with a special joint so that the light signal can pass through.
Light transmitted through fiber suffers loss, which is mainly composed of the transmission loss of the fiber itself and the splice loss at the fiber joints. Once the optical cable is ordered, the fiber's own transmission loss is essentially fixed; the splice loss is determined by the fiber itself and by on-site workmanship. Reducing the splice loss at fiber joints increases the transmission distance between optical amplifiers and improves the attenuation margin of the fiber link.
4. Fiber Media Converter
A fiber optic media converter is an Ethernet transmission media conversion unit that interconverts short-distance twisted-pair electrical signals and long-distance optical signals.
Fiber converters are generally used in real network environments where Ethernet cable cannot reach and optical fiber must be used to extend the transmission distance. They are usually deployed at the access layer of metropolitan area networks, and they also play a large role in connecting the last mile of fiber to the metro network and outer-layer networks.
5. Fiber Optic Multiplexer
A fiber optic multiplexer is a fiber communication device used to extend data transmission. It works mainly through signal modulation and photoelectric conversion, using the transmission characteristics of optical fiber to achieve long-distance transmission. Optical multiplexers are generally used in pairs, divided into an optical transmitter and an optical receiver. The optical transmitter performs the electrical-to-optical conversion and sends the optical signal over the fiber; the optical receiver converts the optical signal from the fiber back into an electrical signal, completing the optical-to-electrical conversion. The optical multiplexer is thus used for long-distance data transmission.
Optical multiplexers come in many types, such as telephone multiplexers, video multiplexers, video-audio multiplexers, video-data multiplexers, video-audio-data multiplexers, and so on. The most commonly used is the video multiplexer, which is especially widespread in the security industry.
The optical multiplexer is the terminal equipment for optical signal transmission. In principle it is a photoelectric conversion transmission device placed at both ends of the optical cable, one optical transmitter and one optical receiver, as its name implies. Transmitters and receivers are therefore used in pairs, and optical multiplexers are usually purchased and counted in pairs rather than as individual units.
Monday, November 4, 2013
How to Build a Data Center
>> What are Data Centers?
Data Centers house critical computing resources in a controlled environment and under centralized management, which enables enterprises to operate around the clock or according to their business needs.
These computing resources include:
· Mainframes
· Web and application servers
· File and printer servers
· Messaging servers
· Application software and the operating systems that run them
· Storage subsystems
· Network Infrastructure (IP or Storage-Area Network (SAN))
Applications range from internal financial and human resources to external e-commerce and business-to-business applications.
Additionally, a number of servers support network operations and network-based applications.
Network operation applications include:
· Network Time Protocol (NTP)
· TN3270
· FTP
· Domain Name System (DNS)
· Dynamic Host Configuration Protocol (DHCP)
· Simple Network Management Protocol (SNMP)
· TFTP
· Network File System (NFS)
Network-based applications include:
· IP telephony
· Video streaming over IP
· IP video conferencing
· and so on …
Virtually every enterprise has one or more Data Centers. Some have evolved rapidly to accommodate various enterprise application environments using distinct operating systems and hardware platforms. The evolution has resulted in complex and disparate environments that are expensive to manage and maintain.
In addition to the application environment, the supporting network infrastructure might not have changed fast enough to be flexible in accommodating ongoing redundancy, scalability, security, and management requirements.
A Data Center network design lacking in any of these areas risks not being able to sustain the expected service level agreements (SLAs). Data Center downtime, service degradation, or the inability to roll out new services implies that SLAs are not met, which leads to a loss of access to critical resources and a quantifiable impact on normal business operation. The impact could be as simple as increased response time or as severe as loss of data.
>> Data Center Goals
The benefits provided by a Data Center include traditional business-oriented goals such as the support for business operations around the clock (resiliency), lowering the total cost of operation and the maintenance needed to sustain the business function (total cost of ownership), and the rapid deployment of applications and consolidation of computing resources (flexibility).
These business goals generate a number of information technology (IT) initiatives, including:
· Business continuance
· Increased security in the Data Center
· Application, server, and Data Center consolidation
· Integration of applications, whether client/server and multitier (n-tier), or web services-related applications
· Storage consolidation
These IT initiatives are a combination of the need to address short-term problems and the establishment of a long-term strategic direction, all of which require an architectural approach to avoid unnecessary instability if the Data Center network is not flexible enough to accommodate future changes.
The design criteria are:
· Availability
· Scalability
· Security
· Performance
· Manageability
These design criteria are applied to these distinct functional areas of a Data Center network:
· Infrastructure services – Routing, switching, and server-farm architecture
· Application services – Load balancing, Secure Socket Layer (SSL) offloading, and caching
· Security services – Packet filtering and inspection, intrusion detection, and intrusion prevention
· Storage services – SAN architecture, Fibre Channel switching, backup, and archival
· Business continuance – SAN extension, site selection, and Data Center interconnectivity
>> Data Center Facilities
Because Data Centers house critical computing resources, enterprises must make special arrangements with respect to both the facilities that house the equipment and the personnel required for a 24-by-7 operation.
These facilities are likely to support a high concentration of server resources and network infrastructure. The demands posed by these resources, coupled with the business criticality of the applications, create the need to address the following areas:
· Power capacity
· Cooling capacity
· Cabling
· Temperature and humidity controls
· Fire and smoke systems
· Physical security: restricted access and surveillance systems
· Rack space and raised floors
>> Roles of Data Centers in the Enterprise
Figure 1-1 presents the different building blocks used in the typical enterprise network and illustrates the location of the Data Center within that architecture.
The building blocks of this typical enterprise network include:
· Campus network
· Private WAN
· Remote access
· Internet server farm
· Extranet server farm
· Intranet server farm
Data Centers typically house many components that support the infrastructure building blocks, such as the core switches of the campus network or the edge routers of the private WAN.
Data Center designs can include any or all of the building blocks in Figure 1-1, including any or all server farm types. Each type of server farm can be a separate physical entity, depending on the business requirements of the enterprise.
For example, a company might build a single Data Center and share all resources, such as servers, firewalls, routers, switches, and so on. Another company might require that the three server farms be physically separated with no shared equipment.
Enterprise applications typically focus on one of the following major business areas:
· Customer relationship management (CRM)
· Enterprise Resource Planning (ERP)
· Supply chain management (SCM)
· Sales force automation (SFA)
· Order processing
· E-commerce
>> Roles of Data Centers in the Service Provider Environment
Data Centers in service provider (SP) environments, known as Internet Data Centers (IDCs), unlike those in enterprise environments, are a source of revenue and support collocated server farms for enterprise customers.
The SP Data Center is a service-oriented environment built to house, or host, an enterprise customer’s application environment under tightly controlled SLAs for uptime and availability. Enterprises also build IDCs when the sole reason for the Data Center is to support Internet-facing applications.
The IDCs are separated from the SP internal Data Centers that support the internal business applications environments.
Whether built for internal facing or collocated applications, application environments follow specific application architectural models such as the classic client/server or the n-tier model.
>> The Client/Server Model and Its Evolution
The classic client/server model describes the communication between an application and a user through the use of a server and a client. The classic client/server model consists of the following:
· A thick client that provides a graphical user interface (GUI) on top of an application or business logic where some processing occurs
· A server where the remaining business logic resides
Thick client is an expression referring to the complexity of the business logic (software) required on the client side and the necessary hardware to support it.
A thick client is then a portion of the application code running at the client’s computer that has the responsibility of retrieving data from the server and presenting it to the client. The thick client code requires a fair amount of processing capacity and resources to run in addition to the management overhead caused by loading and maintaining it on the client base.
The server side is a single server running the presentation, application, and database code that uses multiple internal processes to communicate information across these distinct functions.
The exchange of information between client and server is mostly data because the thick client performs local presentation functions so that the end user can interact with the application using a local user interface.
Client/server applications are still widely used, yet the client and server use proprietary interfaces and message formats that different applications cannot easily share.
Part a of Figure 1-2 shows the client/server model.
The most fundamental changes to the thick client and single-server model started when web-based applications first appeared.
Web-based applications rely on more standard interfaces and message formats, which make applications easier to share. HTML and HTTP provide a standard framework that allows generic clients such as web browsers to communicate with generic applications as long as web servers are used for the presentation function.
HTML describes how the client should render the data; HTTP is the transport protocol used to carry HTML data. Microsoft Internet Explorer is an example of a client (web browser); Apache and Microsoft Internet Information Server (IIS) are examples of web servers.
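To make the generic client/web server exchange concrete, the short Python sketch below fetches an HTML page over HTTP using only the standard library; example.com is just a placeholder host, not part of the original text.

```python
# Minimal sketch of the standard exchange described above: a generic HTTP
# client fetches HTML from a generic web server. "example.com" is a placeholder.

import http.client

conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/")                 # HTTP carries the request ...
response = conn.getresponse()
html = response.read().decode("utf-8")   # ... and HTML describes how to render the data
print(response.status, response.reason)
print(html[:200])
conn.close()
```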
The migration from the classic client/server to a web-based architecture implies the use of thin clients (web browsers), web servers, application servers, and database servers.
The web browser interacts with web servers and application servers, and the web servers interact with application servers and database servers. These distinct functions supported by the servers are referred to as tiers, which, in addition to the client tier, refer to the n-tier model.
>> The n-Tier Model
Part b of Figure 1-2 shows the n-tier model. Figure 1-2 presents the evolution from the classic client/server model to the n-tier model.
The client/server model uses the thick client with its own business logic and GUI to interact with a server that provides the counterpart business logic and database functions on the same physical device.
The n-tier model uses a thin client and a web browser to access the data in many different ways. The server side of the n-tier model is divided into distinct functional areas that include the web, application, and database servers.
The n-tier model relies on a standard web architecture where the web browser formats and presents the information retrieved from the web server. The server side in the web architecture consists of multiple and distinct servers that are functionally separate. The n-tier model can be the client and a web server; or the client, the web server, and an application server; or the client, web, application, and database servers. This model is more scalable and manageable, and even though it is more complex than the classic client/server model, it enables application environments to evolve toward distributed computing environments.
The n-tier model makes a significant step in the evolution of distributed computing from the classic client/server model. The n-tier model provides a mechanism to increase performance and maintainability of client/server applications while the control and management of application code is simplified.
Figure 1-3 introduces the n-tier model and maps each tier to a partial list of currently available technologies at each tier.
Notice that the client-facing servers provide the interface to access the business logic at the application tier. Although some applications provide a non-web-based front end, current trends indicate the process of “web-transforming” business applications is well underway.
This process implies that the front end relies on a web-based user interface, which interacts with a middle layer of applications that obtain data from the back-end systems.
These middle-tier applications and the back-end database systems are distinct pieces of logic that perform specific functions. The logical separation of front-end, application, and back-end functions has enabled their physical separation. The implications are that the web and application servers, as well as the application and database servers, no longer have to coexist in the same physical server. This separation increases the scalability of the services and eases the management of large-scale server farms. From a network perspective, these groups of servers performing distinct functions could also be physically separated into different network segments for security and manageability reasons.
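The separation described above can be sketched in a few lines. The example below is purely illustrative (the tiers are ordinary Python functions and an in-memory SQLite table stands in for the database tier); in a real Data Center each tier would be a separate server farm on its own network segment.

```python
# Purely illustrative sketch of the n-tier separation described above.
# Each tier is a separate function here; in a real Data Center each would be
# a separate server farm on its own network segment.

import sqlite3

def database_tier(query: str, params: tuple) -> list:
    """Back-end tier: the only layer that touches the data store."""
    with sqlite3.connect(":memory:") as db:
        db.execute("CREATE TABLE orders (id INTEGER, item TEXT)")
        db.execute("INSERT INTO orders VALUES (1, 'fiber patch cord')")
        return db.execute(query, params).fetchall()

def application_tier(order_id: int) -> dict:
    """Middle tier: business logic, translates requests into database commands."""
    rows = database_tier("SELECT id, item FROM orders WHERE id = ?", (order_id,))
    return {"order": rows[0]} if rows else {"error": "not found"}

def web_tier(order_id: int) -> str:
    """Front-end tier: presentation only, formats the result for the browser."""
    result = application_tier(order_id)
    return f"<html><body>{result}</body></html>"

print(web_tier(1))
```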
>> Multitier Architecture Application Environment
Multitier architectures refer to the Data Center server farms supporting applications that provide a logical and physical separation between various application functions, such as web, application, and database (the n-tier model).
The network architecture is then dictated by the requirements of the applications in use and their specific availability, scalability, security, and management goals. For each server-side tier, there is a one-to-one mapping to a network segment that supports the specific application function and its requirements. Because the resulting network segments are closely aligned with the tiered applications, they are described in reference to the different application tiers.
Figure 1-4 presents the mapping from the n-tier model to the supporting network segments used in a multitier design.
The web server tier is mapped to the front-end segment, the business logic to the application segment, and the database tier to the back-end segment.
Notice that all the segments supporting the server farm connect to the access layer switches, which in a multitier architecture are different access switches supporting the various server functions.
The evolution of application architectures and the departure from multitier application environments still require a network to support the interaction between the communicating entities.
>> Data Center Architecture
The enterprise Data Center architecture is inclusive of many functional areas, as presented earlier in Figure 1-1.
The focus of this section is the architecture of a generic enterprise Data Center connected to the Internet and supporting an intranet server farm.
Other types of server farms follow the same architecture used for intranet server farms yet with different scalability, security, and management requirements.
Figure 1-5 introduces the topology of the Data Center architecture.
Figure 1-5 shows a fully redundant enterprise Data Center supporting the following areas:
· No single point of failure – redundant components
· Redundant Data Centers
The core connectivity functions supported by Data Centers are Internet Edge connectivity, campus connectivity, and server-farm connectivity, as presented by Figure 1-5.
Internet Edge
The Internet Edge provides the connectivity from the enterprise to the Internet and its associated redundancy and security functions, as follows:
· Redundant connections to different service providers
· External and internal routing through External Border Gateway Protocol (EBGP) and Internal Border Gateway Protocol (IBGP)
· Edge security to control access from the Internet
· Control for access to the Internet from the enterprise clients
Campus Core Switches
The campus core switches provide connectivity between the Internet Edge, the intranet server farms, the campus network, and the private WAN.
The core switches physically connect to the devices that provide access to other major network areas, such as the private WAN edge routers, the server-farm aggregation switches, and campus distribution switches.
Network Layers of the Server Farm
As depicted in Figure 1-6, the following are the network layers of the server farm:
· Aggregation layer
· Access Layer
— Front-end segment
— Application segment
— Back-end segment
· Storage Layer
· Data Center transport layer
Some of these layers depend on the specific implementation of the n-tier model or the requirements for Data Center-to-Data Center connectivity, which implies that they might not exist in every Data Center implementation.
Although some of these layers might be optional in the Data Center architecture, they represent the trend in continuing to build highly available and scalable enterprise Data Centers.
This trend specifically applies to the storage and Data Center transport layers supporting storage consolidation, backup and archival consolidation, high-speed mirroring or clustering between remote server farms, and so on.
>> Aggregation Layer
The aggregation layer is the aggregation point for devices that provide services to all server farms. These devices are multilayer switches, firewalls, load balancers, and other devices that typically support services across all servers.
The multilayer switches are referred to as aggregation switches because of the aggregation function they perform. Service devices are shared by all server farms. Specific server farms are likely to span multiple access switches for redundancy, thus making the aggregation switches the logical connection point for service devices, instead of the access switches.
As depicted in Figure 1-6, the aggregation switches provide basic infrastructure services and connectivity for other service devices. The aggregation layer is analogous to the traditional distribution layer in the campus network in its Layer 3 and Layer 2 functionality.
The aggregation switches support the traditional switching of packets at Layer 3 and Layer 2 in addition to the protocols and features to support Layer 3 and Layer 2 connectivity.
>> Access Layer
The access layer provides Layer 2 connectivity and Layer 2 features to the server farm. Because in a multitier server farm each server function could be located on different access switches in different segments, the following sections explain the details of each segment.
1. Front-End Segment
The front-end segment consists of Layer 2 switches, security devices or features, and the front-end server farms.
The front-end segment is analogous to the traditional access layer of the hierarchical campus network design and provides the same functionality.
The access switches are connected to the aggregation switches in the manner depicted in Figure 1-6.
The front-end server farms typically include FTP, Telnet, TN3270 (mainframe terminals), Simple Mail Transport Protocol (SMTP), web servers, DNS servers, and other business application servers, in addition to network-based application servers such as IP television (IPTV) broadcast servers and IP telephony call managers that are not placed at the aggregation layer because of port density or other design requirements.
The specific network features required in the front-end segment depend on the servers and their functions. For example, if the network supports video streaming over IP, it might require multicast; if it supports Voice over IP (VoIP), quality of service (QoS) must be enabled.
The need for Layer 2 adjacency is the result of Network Address Translation (NAT) and other header rewrite functions performed by load balancers or firewalls on traffic destined to the server farm. The return traffic must be processed by the same device that performed the header rewrite operations.
Layer 2 connectivity is also required between servers that use clustering for high availability or that need to communicate on the same subnet. This requirement implies that multiple access switches supporting front-end servers can support the same set of VLANs to provide Layer 2 adjacency between them.
Security features include Address Resolution Protocol (ARP) inspection, broadcast suppression, private VLANs, and others that are enabled to counteract Layer 2 attacks.
Security devices include network-based intrusion detection systems (IDSs) and host-based IDSs to monitor and detect intruders and prevent vulnerabilities from being exploited. In general, infrastructure components such as the Layer 2 switches provide intelligent network services that enable front-end servers to provide their functions.
Note that the front-end servers are typically taxed in their I/O and CPU capabilities. For I/O, this strain is a direct result of serving content to the end users; for CPU, it is the connection rate and the number of concurrent connections needed to be processed.
Scaling mechanisms for front-end servers typically include adding more servers with identical content and then equally distributing the load they receive using load balancers.
Load balancers distribute the load (or load balance) based on Layer 4 or Layer 5 information. Layer 4 is widely used for front-end servers to sustain a high connection rate without necessarily overwhelming the servers.
Scaling mechanisms for web servers also include the use of SSL offloaders and Reverse Proxy Caching (RPC).
2. Application Segment
The application segment has the same network infrastructure components as the front-end segment, plus the application servers.
The features required by the application segment are almost identical to those needed in the front-end segment, albeit with additional security.
This segment relies strictly on Layer 2 connectivity, yet the additional security is a direct requirement of how much protection the application servers need because they have direct access to the database systems.
Depending on the security policies, this segment uses firewalls between web and application servers, IDSs, and host IDSs. Like the front-end segment, the application segment infrastructure must support intelligent network services as a direct result of the functions provided by the application services.
Application servers run a portion of the software used by business applications and provide the communication logic between the front end and the back end, which is typically referred to as middleware or business logic.
Application servers translate user requests to commands that the back-end database systems understand. Increasing the security at this segment focuses on controlling the protocols used between the front-end servers and the application servers to avoid trust exploitation and attacks that exploit known application vulnerabilities.
Figure 1-7 introduces the front-end, application, and back-end segments in a logical topology.
Note that the application servers are typically CPU-stressed because they need to support the business logic. Scaling mechanisms for application servers also include load balancers. Load balancers can select the right application based on Layer 5 information.
Deep packet inspection on load balancers allows the partitioning of application server farms by content. For example, some server farms could be dedicated to a particular scripting language (.cgi, .jsp, and so on), with requests directed to them accordingly. This arrangement allows application administrators to control and manage server behavior more efficiently.
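A hedged sketch of that behaviour is shown below: a "Layer 5" decision picks a server pool from the requested content type, and a simple round robin stands in for Layer 4 distribution inside the pool. The pool and server names are invented for illustration, not taken from any real configuration.

```python
# Hedged sketch of the load-balancing behaviour described above. Pool names
# and servers are invented for illustration. Layer 4 balancing is modelled as
# a simple round robin; the "Layer 5" decision picks a pool from the requested
# URL, the way deep packet inspection partitions server farms by content.

from itertools import cycle

POOLS = {
    ".jsp": cycle(["app-jsp-1", "app-jsp-2"]),      # farm dedicated to JSP content
    ".cgi": cycle(["app-cgi-1", "app-cgi-2"]),      # farm dedicated to CGI content
    "default": cycle(["web-1", "web-2", "web-3"]),  # generic front-end farm
}

def pick_server(url_path: str) -> str:
    """Choose a pool from the content type, then round-robin inside it."""
    for suffix, pool in POOLS.items():
        if suffix != "default" and url_path.endswith(suffix):
            return next(pool)
    return next(POOLS["default"])

for path in ["/index.html", "/report.jsp", "/search.cgi", "/about.html"]:
    print(path, "->", pick_server(path))
```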
3. Back-End Segment
The back-end segment is the same as the previous two segments except that it supports connectivity to database servers. The back-end segment features are almost identical to those of the application segment, yet the security considerations are more stringent and aim at protecting the data, critical or not.
The hardware supporting the database systems ranges from medium-sized servers to high-end servers, some with direct locally attached storage and others using disk arrays attached to a SAN.
When the storage is separated, the database server is connected to both the Ethernet switch and the SAN. The connection to the SAN is through a Fibre Channel interface. Figure 1-8 presents the back-end segment in reference to the storage layers. Notice the connections from the database server to the back-end segment and the storage layer.
Note that in other connectivity alternatives, the security requirements do not call for physical separation between the different server tiers.
>> Storage Layer
The storage layer consists of the storage infrastructure such as Fibre Channel switches and routers that support small computer system interface (SCSI) over IP (iSCSI) or Fibre Channel over IP (FCIP). Storage network devices provide the connectivity to servers, storage devices such as disk subsystems, and tape subsystems.
SAN environments in Data Centers commonly use Fibre Channel to connect servers to the storage devices and to transmit SCSI commands between them. Storage networks allow the transport of SCSI commands over the network. This transport is possible over the Fibre Channel infrastructure or over IP using FCIP and iSCSI.
FCIP and iSCSI are the emerging Internet Engineering Task Force (IETF) standards that enable SCSI access and connectivity over IP.
The network used by these storage devices is referred to as a SAN. The Data Center is the location where the consolidation of applications, servers, and storage occurs and where the highest concentration of servers is likely, and thus where SANs are located. The current trends in server and storage consolidation are the result of the need for increased efficiency in the application environments and for lower costs of operation.
Data Center environments are expected to support high-speed communication between servers and storage and between storage devices. These high-speed environments require block-level access to the information supported by SAN technology.
There are also requirements to support file-level access specifically for applications that use Network Attached Storage (NAS) technology. Figure 1-8 introduces the storage layer and the typical elements of single and distributed Data Center environments.
Figure 1-8 shows a number of database servers as well as tape and disk arrays connected to the Fibre Channel switches.
Servers connected to the Fibre Channel switches are typically critical servers and are always dual-homed. Other common alternatives to increase availability include mirroring, replication, and clustering between database systems or storage devices.
These alternatives typically require the data to be housed in multiple facilities, thus lowering the likelihood of a site failure preventing normal systems operation.
Site failures are recovered by replicas of the data at different sites, thus creating the need for distributed Data Centers and distributed server farms and the obvious transport technologies to enable communication between them.
>> Data Center Transport Layer
The Data Center transport layer includes the transport technologies required for the following purposes:
· Communication between distributed Data Centers for rerouting client-to-server traffic
· Communication between distributed server farms located in distributed Data Centers for the purposes of remote mirroring, replication, or clustering
Transport technologies must support a wide range of requirements for bandwidth and latency depending on the traffic profiles, which imply a number of media types ranging from Ethernet to Fibre Channel.
For user-to-server communication, the possible technologies include Frame Relay, ATM, DS channels in the form of T1/E1 circuits, Metro Ethernet, and SONET.
For server-to-server and storage-to-storage communication, the required technologies are dictated by the server media types and the transport technologies that support them transparently. For example, as depicted in Figure 1-8, storage devices use Fibre Channel and Enterprise Systems Connectivity (ESCON), which should be supported by the metro optical transport infrastructure between the distributed server farms.
If ATM and Gigabit Ethernet (GE) are used between distributed server farms, the metro optical transport could consolidate the use of fiber more efficiently. For example, instead of having dedicated fiber for ESCON, GE, and ATM, the metro optical technology could transport them concurrently.
The likely transport technologies are dark fiber, coarse wavelength division multiplexing (CWDM), and dense wavelength division multiplexing (DWDM), which offer transparent connectivity (Layer 1 transport) between distributed Data Centers for media types such as GE, Fibre Channel, ESCON, and fiber connectivity (FICON).