Big Brainy Data Pipe
From his vantage point as entrepreneur-in-residence at Benchmark Capital in fall 2001, Govind Kizhepat watched a steady stream of companies come through the venture capital firm’s offices seeking financing to launch data-intensive applications. It became clear, Kizhepat recalls, that there was a pressing need to move data around more efficiently and intelligently.
By February 2002, with backing from Benchmark and other venture capital firms, the semiconductor industry veteran had founded NetXen to do just that. NetXen has since grown to a 120-person company operating out of two offices: headquarters in Santa Clara, Calif., and a satellite in Pune, India.
“Immediately after 9/11, there was a data explosion everywhere, in finance, in media and other industries,” Kizhepat says. “Traditional point solutions to transfer data in a particular situation were obviously going to fall short. A radically different architecture was needed.”
Last spring, after four years of design and testing, NetXen rolled out the Intelligent NIC, which the company touts as the first intelligent network interface card for high-volume 10-Gigabit Ethernet deployments. Laying the groundwork for what NetXen describes as the “agile data center,” the integrated hardware-software solution not only offers the fat data conduit of 10 Gig-E for blazing fast I/O processing but also provides built-in layers of programmable intelligence. That means the Intelligent NIC, offered both as circuit boards and as dual-ported blade versions of NetXen’s 10 Gig-E accelerators, can evolve with the network and data center.
Full Speed Ahead
NetXen designed its technology to accelerate a broad range of networking protocols — including Transmission Control Protocol/Internet Protocol, Remote Direct Memory Access, Internet SCSI and the iSCSI Extension for RDMA — and the built-in intelligence is intended to optimize business processes, not just manage contention avoidance, Kizhepat says. By shifting I/O processing away from the CPU, the Intelligent NIC can also cut power consumption and cooling costs by up to 50 percent, he says.
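One per-packet chore that offload engines of this kind take off the host CPU is computing the 16-bit ones’-complement Internet checksum used by IP, TCP and UDP. The sketch below (per RFC 1071) is purely illustrative of the arithmetic the CPU no longer has to repeat for every packet; the function name and sample header are this article’s examples, not NetXen’s API:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum (RFC 1071), as used by IP/TCP/UDP.

    This is the kind of per-packet arithmetic a TCP/IP offload
    engine performs in hardware instead of on the host CPU.
    """
    if len(data) % 2:          # pad odd-length input with a zero byte
        data += b"\x00"
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:         # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF     # ones' complement of the folded sum


# Example: an IPv4 header with its checksum field zeroed out.
header = bytes.fromhex("4500003c1c46400040060000ac100a63ac100a0c")
print(hex(internet_checksum(header)))  # 0xb1e6
```

Verifying a received packet works the same way: with the computed checksum written back into the header, the checksum of the whole header comes out to zero.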
The product was always intended for volume 10 Gig-E markets, says Kizhepat, and the launch announcement included news of alliances with system manufacturers Hewlett-Packard and IBM, who are incorporating NetXen technology into their equipment. Kizhepat notes that the company’s technology is available for and works with all kinds of network and data center environments.
Staking a claim on the 10 Gig-E market space pits NetXen directly against major players such as Intel, says analyst Anne MacFarland of the Clipper Group.
“I make the analogy that what NetXen does is like what a good sous-chef does in a kitchen,” MacFarland says. “Their role is not really obvious. But by speeding and managing the I/O functions of the system, they can make all servers work better and make all business processes work better — just as the sous-chef improves everything that comes out of the kitchen without getting a lot of credit.”
According to MacFarland, the basic premise of NetXen technology — offloading I/O management from the central processing portion of a system — is not new, but the startup approaches it deftly. The flexibility of NetXen’s approach to 10 Gig-E is a key to the company’s early success, she says.
The technology can be deployed incrementally and offers benefits to users of all sizes, from vast enterprises that must move mountains of data to small businesses looking for speed and economy, MacFarland says. The increase in multimedia applications is making fat data pipes a necessity in many data centers. NetXen also supports virtualization, speeding data rates for virtual machines as well as physical ones and facilitating server consolidation, she says.
NetXen “complements all kinds of technology. It doesn’t matter what hardware you’re using or which operating system,” MacFarland says. “It accelerates whatever you need to have accelerated.”
The power-saving aspects of NetXen’s technology will also become increasingly important as users try to control the costs of their overheated, juice-gulping data centers, says Kizhepat. “Our ability to keep adding firmware and software on the chip makes a real difference,” he says. “The other guys hard-wire on a piece of silicon, and that’s not going to work in the ever-changing world of the Internet.”
Information technology companies can pose special challenges for internal IT staff, because the services they support sit close to the core of the business, says Sanjeev Jorapur, NetXen’s vice president for technology.
For example, the Santa Clara, Calif., company’s three full-time and one part-time IT staff members not only support the basic needs of the 120 employees but also keep nearly 300 servers in the engineering lab running. And that’s counting only the physical boxes, without adding virtual servers to the tally.
“This is an engineering and development lab, and we make technology that goes in servers,” Jorapur says. “Our view is that IT is a service organization within the business, and it has to conform to the nature and culture of the business. The best way for IT to operate and grow in importance is to hear what the organization wants. When it does that well, it lubricates the entire process.”
The engineers on NetXen’s development and test teams largely work on technology from the company’s original equipment manufacturer partners, which include Hewlett-Packard and IBM, so IT is responsible for maintaining multiple platforms, as well as applications for virtual server implementations from VMware and XenSource (which is being acquired by Citrix).
“Part of what we do is testing on a whole gamut of servers, from 1U or 2U to 4U boxes or blade servers, and they all have to be in good order,” Jorapur says.
As would be the case in any enterprise, the IT staff also must keep the company’s business software running, and the lines of communication open on e-mail, Windows Messenger and Skype. NetXen engineers have also started to use a wiki to record the progress of their work and receive feedback from colleagues. A virtual private network tunnel provides a secure connection for NetXen’s satellite office in Pune, India, as well as for employees who work remotely.
“It’s very important to us that when you work remotely, it’s like you are in the same office as everyone else in the company,” says Jorapur. “All our applications have to work the same no matter where we are.”
IT can take a useful leadership role in evaluating technologies to meet the needs expressed by the organization, he says, adding that the staff is weighing the benefits of collaboration tools, for instance.
NetXen looks for IT staffers with multiple skills to match the broad range of responsibilities they’ll cover — a crucial hiring philosophy for any small company, according to Jorapur.
“We’re not much different in some ways than any other small company, even though we have different technical needs,” he says. “People have to wear many hats. Flexibility in skills and work habits is important, along with the willingness to put in time and effort. Those things make [people] really valuable to the company.”
Founder Govind Kizhepat says NetXen tries to “practice what we preach” when it comes to IT.
“Of course we use our own products and test our own stuff, but we also believe that IT should have the agility we’re trying to bring to the data center,” he says.
Late 1960s: Before Ethernet, Norman Abramson leads researchers at the University of Hawaii in creating ALOHAnet. The digital radio network, which established a simple transmission and response protocol for the signals, transmits packets of information between the islands. Interference between signals, or collisions, keeps successful transmission rates below 20 percent.
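That sub-20-percent ceiling is not an accident of the hardware: for pure (unslotted) ALOHA with Poisson-distributed traffic, throughput is S = G·e^(−2G) at offered load G, which peaks at 1/(2e), about 18.4 percent. A quick numerical check of that textbook result in Python:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """Fraction of successful transmissions at offered load G
    (Poisson arrivals, unslotted ALOHA): S = G * exp(-2G)."""
    return G * math.exp(-2 * G)

# Sweep the offered load; the curve peaks at G = 0.5.
peak = max(pure_aloha_throughput(g / 1000) for g in range(1, 3001))
print(f"{peak:.4f}")  # 0.1839, i.e. 1/(2e)
```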
1972: Robert Metcalfe of Xerox Palo Alto Research Center develops a new system in which stations listen before transmitting and, when collisions are detected, retry after randomized backoff delays. Metcalfe and his colleagues build an experimental version of the new system, called the Alto Aloha Network, that supports a data transmission rate of 2.94 megabits per second between Xerox Alto personal workstations and laser printers. Successful transmission rates climb above 90 percent.
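The retry rule Ethernet eventually settled on is truncated binary exponential backoff: after the nth collision a station waits a random number of slot times drawn from 0 to 2^min(n, 10) − 1, and gives up after 16 attempts. A sketch of that rule (constant and function names are illustrative):

```python
import random

MAX_ATTEMPTS = 16   # classic Ethernet gives up after 16 tries
BACKOFF_CAP = 10    # the exponent is capped ("truncated") at 10

def backoff_slots(attempt: int, rng=random.randrange) -> int:
    """Slot times to wait after the `attempt`-th collision,
    per truncated binary exponential backoff."""
    if attempt > MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(attempt, BACKOFF_CAP)
    return rng(2 ** k)          # uniform over [0, 2**k - 1]

# After 3 collisions a station waits 0..7 slot times
# (a slot is 51.2 microseconds on 10-Mbps Ethernet).
print(backoff_slots(3) in range(8))  # True
```

Doubling the range after each collision spreads contending stations out quickly, which is why successful transmission rates climb so far above ALOHAnet’s.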
1973: The Xerox PARC team renames its system Ethernet to suggest that the protocol will work on computers other than Altos and that the technology is a leap beyond ALOHAnet.
1976: Metcalfe and David Boggs publish the groundbreaking paper, “Ethernet: Distributed Packet Switching for Local Computer Networks.”
1977: Metcalfe receives U.S. Patent 4,063,220 for his “multipoint data communication system with collision detection.”
1979: Digital Equipment, Intel and Xerox join in what is dubbed the DIX cartel to produce Ethernet products.
1983: The Institute of Electrical and Electronics Engineers ratifies Ethernet as standard 802.3.
Early ’80s: Novell, through NetWare, pushes broad acceptance of Ethernet.
Early ’90s: Ethernet becomes the de facto local area network standard, replacing rival protocols such as Token Ring and ARCnet. By the end of the decade, an estimated 80 percent to 90 percent of all LAN implementations rely on Ethernet.
1995: IEEE awards Abramson its Koji Kobayashi Computers and Communications Award for “the development of the concept of Aloha systems, which led to modern LANs.”
1995: IEEE ratifies Fast Ethernet standard 802.3u, providing for transmission rates of 100 Mbps.
1998: IEEE approves 802.3z, the Gigabit Ethernet standard.
2005: The consolidated IEEE 802.3-2005 standard incorporates 10 Gig-E, first ratified as supplement 802.3ae in 2002.
Future: 40 and 100 Gig-E are in development.
Sources: www.trendcomms.com, and Ethernet: The Definitive Guide, by Charles E. Spurgeon