Determinism and Ethernet

Research Paper  •  ReviewEssays.com  •  December 24, 2010  •  2,586 Words (11 Pages)

With the emergence of Ethernet as an industrial fieldbus, many detractors have begun to question whether Ethernet is up to the task of being a control network and, in particular, whether Ethernet can be considered deterministic. In this article I will explain some of the developments that have taken place since Ethernet was invented in 1973 which allow us to consider a properly planned and installed Ethernet network deterministic.

The start of Ethernet

Ethernet, as we know it, was invented by Bob Metcalfe of Xerox's Palo Alto Research Center almost 30 years ago. However, it has its roots a few years earlier, in pioneering work done by Norman Abramson at the University of Hawaii in the late 1960s.

Abramson had the task of getting the university mainframe talking to outlying terminals located on some of the other islands of Hawaii. Running a physical cable to them was out of the question, so Abramson looked at using radio. However, due to the frequencies he was forced to use, he did not have enough distinct frequencies for all of his terminals - so some terminals would have to share. The problem with shared radio frequencies is that interference occurs, and so Abramson realised he would have to find a way of regulating the transmission of data.

Abramson came up with the idea of using just two frequencies for all communication, with rules dictating when, and what, a terminal or the mainframe would send. One frequency would carry outbound transmissions from the mainframe; another would carry inbound transmissions from the terminals. On top of this, Abramson developed a system of addresses and replies for communication.

The mainframe would send out a message with its address field set to one of the terminals. Although all terminals would receive it, only the one it was addressed to would pass it on for further processing; the others would simply discard it. Upon receiving the message, the terminal would check that no other terminal was using the frequency, and would then transmit a receipt for the message, which would normally be received only by the mainframe and nearby terminals. This worked in both directions, ie for messages sent out by the mainframe and for messages sent out by the terminals.
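The address-filtering rule can be sketched as follows (a toy illustration; the address value and frame layout are invented for the example, not taken from ALOHAnet):

```python
# Sketch of address filtering on a shared channel: every terminal hears
# every frame, but only the addressee processes it; the rest discard it.

MY_ADDRESS = 0x2A  # hypothetical address of this terminal

def handle_frame(frame: dict):
    """Return the payload if the frame is addressed to us, else None."""
    if frame["dest"] != MY_ADDRESS:
        return None          # not for us: silently discard
    return frame["payload"]  # pass the payload up for further processing

# A frame from the mainframe reaches all terminals...
assert handle_frame({"dest": 0x2A, "payload": "PRINT REPORT"}) == "PRINT REPORT"
# ...but a terminal with a different address quietly ignores it.
assert handle_frame({"dest": 0x07, "payload": "PRINT REPORT"}) is None
```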

But what if two or more terminals transmitted messages at the same time? Some interference - a collision - would occur, the mainframe would not receive the message, and so would not send a reply to the terminals. The terminals had a timeout period built in, typically 200 to 1500 milliseconds: if the reply had not arrived within that period, they would send the message again. There was also a maximum limit on the number of retries a terminal would attempt before reporting an error to the operator or user.
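The retry loop described above can be sketched as follows (a minimal illustration, not original ALOHAnet code; the `transmit` and `await_ack` callbacks are hypothetical stand-ins for the radio hardware):

```python
import random

MAX_RETRIES = 5  # hypothetical limit; the text only says "a maximum limit"

def send_with_retries(transmit, await_ack, max_retries=MAX_RETRIES):
    """Send a message and wait for a receipt; retry on timeout.

    `transmit` sends the message over the shared channel; `await_ack`
    blocks for the given timeout and returns True if the mainframe's
    receipt arrived in time (a lost receipt implies a collision).
    """
    for attempt in range(1, max_retries + 1):
        transmit()
        timeout_ms = random.uniform(200, 1500)  # timeout in the quoted range
        if await_ack(timeout_ms):
            return attempt  # delivered on this attempt
    raise RuntimeError("no receipt after max retries: report error to operator")

# Simulate a channel where the first two frames collide and are lost.
outcomes = iter([False, False, True])
attempts = send_with_retries(transmit=lambda: None,
                             await_ack=lambda t: next(outcomes))
assert attempts == 3
```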

This method is known as the CSMA/CD model (Carrier Sense Multiple Access with Collision Detection).

A few years later, when Bob Metcalfe was tasked with connecting Xerox's latest invention (a laser printer) to another of its inventions (a PC), he decided against running a cable from each PC to the laser printer. Instead, looking through recent developments in communications, he came across Abramson's work and, with a bit of re-engineering, transferred it to coaxial cable - and made it quite a bit faster. (The Aloha network at the University of Hawaii had a bandwidth of 4800bps; Metcalfe got the Ethernet network at PARC up to 2.94Mbps.) To improve efficiency he stuck with the CSMA/CD model but changed it slightly, so that it no longer relied on replies to detect collisions. Instead, since the system ran on copper cable, Metcalfe monitored the actual voltage on the cable: when the voltage jumped by a predetermined offset, a collision had occurred. This voltage jump was easily detected by all interfaces on the cable, and the sending station could then retry after a short delay, known as the backoff time. Ethernet was born!
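The "short delay" is worth making concrete. The scheme later standardized for Ethernet is truncated binary exponential backoff; the sketch below is my own illustration of it, using the 51.2µs slot time of standard 10Mbps Ethernet (512 bit times) rather than the 2.94Mbps PARC version:

```python
import random

SLOT_TIME_US = 51.2  # one slot time on 10Mbps Ethernet (512 bit times)

def backoff_delay(collision_count: int) -> float:
    """Truncated binary exponential backoff, as standardized for Ethernet.

    After the n-th consecutive collision on a frame, the station waits a
    random number of slot times chosen from 0 .. 2**min(n, 10) - 1.
    After 16 collisions the frame is dropped and an error reported.
    """
    if collision_count > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(collision_count, 10)
    slots = random.randint(0, 2**k - 1)
    return slots * SLOT_TIME_US

# After the first collision the delay is either 0 or 1 slot times:
assert backoff_delay(1) in (0.0, SLOT_TIME_US)
```

Because the delay range doubles after each collision, competing stations spread their retries out quickly, which is exactly what makes loaded CSMA/CD networks settle down rather than thrash.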

It is at this point that most arguments about the determinism of Ethernet start and finish. The system described above is obviously not deterministic: you could almost never know how long a message would take to arrive, because you had no way of knowing what other traffic was on the network. However, Ethernet development has not stood still since 1973 - rather, it has increased in pace in recent years.

The speed of Ethernet

One of the main advantages of Ethernet over almost every other network type is its speed. A common phrase in the networking industry at the moment is "fat pipes", referring to the bandwidth of the connection between two devices. As mentioned above, Ethernet at PARC ran at 2.94Mbps. When Microsoft introduced Windows for Workgroups 3.11 in 1993, the coax cable and Network Interface Cards (NICs) supplied in the box ran at 10Mbps. These days, most office networks run at 100Mbps, or Fast Ethernet. However, the real speed (the really fat pipes) is between Ethernet switches, which run at 1Gbps. Later this year the IEEE, the standards body for Ethernet, will announce the standard for 10Gbps Ethernet. After that, who knows? The main point is that Ethernet is far faster than any other network, whether office or factory based. Ask your PLC manufacturer how fast their preferred network is, but be prepared for a very low number. What this means for the determinism detractors is that, with everything moving so quickly, any delay caused by waiting for another device to finish talking is almost negligible. We haven't finished yet though...
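To put numbers on the "fat pipes" argument, here is a quick back-of-the-envelope calculation (my own illustration; the frame sizes are standard Ethernet figures, not from the text) of how long a maximum-size frame occupies the wire at each speed:

```python
# A maximum-size Ethernet frame carries 1500 bytes of payload; with
# header, FCS, preamble and inter-frame gap it occupies about 1538
# bytes (12,304 bits) on the wire.
FRAME_BITS = 1538 * 8

def frame_time_us(bits_per_second: float) -> float:
    """Time, in microseconds, that one full-size frame occupies the medium."""
    return FRAME_BITS / bits_per_second * 1e6

for label, rate in [("10Mbps", 10e6), ("100Mbps", 100e6), ("1Gbps", 1e9)]:
    print(f"{label}: {frame_time_us(rate):.1f} us")
# prints 1230.4 us at 10Mbps, 123.0 us at 100Mbps, 12.3 us at 1Gbps
```

So even in the worst case, waiting out one full-size frame from another device costs on the order of a millisecond at 10Mbps and microseconds at gigabit speeds - which is the sense in which the delay becomes "almost negligible".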

Figure 1: Ethernet has not slowed down.

Improving on bus topologies

Ethernet at PARC ran on very thick, yellow coax cable and used a bus topology. That is, each device was connected to a long run of coax cable. There were rules about how often a device could be connected to the cable and how long the total cable run could be. With Thick Ethernet (10BASE5) a device could be connected every 2.5m, with a maximum cable run of 500m. For most users this was very limiting, and a new, better way had to be found.

The first of these was a change to thin coax cable. Thin Ethernet, or Cheapernet (10BASE2), was introduced in 1982. With Thin Ethernet the minimum distance between nodes came down to 0.5m, but the maximum cable run was also reduced, to 185m. Speed remained at 10Mbps. The shorter reach, however, was not the major drawback.
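The cabling rules for the two coax standards can be captured in a small checker (the distance figures are those quoted above; the function itself is a hypothetical illustration, not part of any standard):

```python
# Cabling limits quoted in the text for the two coax Ethernet standards.
SPECS = {
    "10BASE5": {"min_spacing_m": 2.5, "max_segment_m": 500},
    "10BASE2": {"min_spacing_m": 0.5, "max_segment_m": 185},
}

def segment_ok(standard: str, tap_positions_m: list) -> bool:
    """Check tap spacing and total run length against a coax standard."""
    spec = SPECS[standard]
    taps = sorted(tap_positions_m)
    if taps and taps[-1] > spec["max_segment_m"]:
        return False  # cable run too long
    # every adjacent pair of taps must respect the minimum spacing
    return all(b - a >= spec["min_spacing_m"]
               for a, b in zip(taps, taps[1:]))

assert segment_ok("10BASE2", [0, 0.5, 10, 184])      # within all limits
assert not segment_ok("10BASE5", [0, 1.0, 400])      # taps only 1m apart
```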

...
