Monday, January 24, 2011

Five Best Practices for Unified Communications



Background

To meet today’s increasing demands, businesses need to communicate and collaborate more efficiently. Communication needs to be timely and effective, reaching people where and when they want to be reached, at the office, at home or on the go. Collaboration needs to include a broad sweep of individuals, cross geographic and organizational boundaries and be integrated with business processes.

One way to address these needs is with Unified Communications (UC), which brings together the tools of voice, email, messaging and conferencing and integrates them with business applications such as enterprise resource planning (ERP) and customer relationship management (CRM). UC can improve organizational efficiencies, while simultaneously empowering knowledge workers.

The efficiency gains come from the integration and optimization of communication silos, supported by enterprise-wide standards and shared services. Productivity gains are harder to measure, but there is a clear intuitive benefit in reducing human latency: we have all experienced the frustration of “telephone tag.” With a UC platform, employees can see at a glance who is available before placing the call.

Characteristics of Successful UC Projects

Enterprises that have begun migrating toward UC have encountered some challenges. For UC to be effective, the entire network must be prepared to manage the applications, and the more complex the network, the more difficult the rollout. Limited platform choices and inflexible pricing models make decisions more challenging for network managers. Return on investment (ROI) for UC is also hard to express in dollars and cents, as much of the value comes from improved communications among employees and customers. Early adopters of UC indicate that successful UC programs share the following characteristics:

• They are often inspired by IT, but are always driven by clear business needs – it’s not just a matter of rolling out the infrastructure.

• They are well supported by existing architectures, and their complexity is acknowledged – programs succeed when they’re supported by detailed plans to manage both technical and organizational change.

• They focus on the smallest practical set of technology choices to minimize interoperability issues.

Five Best Practices

Enterprises that are realizing value from their UC programs are succeeding because they’ve followed some basic, common-sense practices. If your organization is considering a move in this direction, here are five best practices to consider:

1. Define a Guiding Vision that will Lead Toward Increased ROI

UC depends on network readiness, network and application convergence, and integrated wired and wireless access. It also involves a blending of software and platform capabilities, leaving most enterprises with a multi-vendor solution. Managing the integration of disparate communications tools and dealing with the associated re-training programs also makes for a complex transition. Developing the right strategy requires a long-term view, as well as an understanding of the short-term challenges.

2. Include Sufficient Up-Front Planning

A clear roadmap for a UC implementation can help businesses manage expectations and ensure that time frames are met. It should recognize that UC is not a software-only concept and include initiatives aimed at ensuring end-user acceptance. The plan should also consider whether some commodity services might need to be outsourced, so corporate knowledge resources can focus on strategic UC applications.

3. Clearly Align Business and Technical Requirements

Phased migration plans can maximize the value of existing investments in applications, messaging, voice and other supporting infrastructures. Vendor-agnostic product recommendations can help ensure that the design meets an organization’s specific requirements, and UC migration planning should also consider next generation service architectures, such as IP Multimedia Subsystem (IMS).

4. Find the Right Champion for the UC Program

Some programs emerge from IT and seek to introduce new capabilities. Programs may also emerge from business units seeking to establish UC capabilities to support a new product, service or business initiative. Regardless of the champion, there must be a well-developed integration plan and a realistic level of funding.

5. Establish Cross-Functional Teams to Help Manage the Implementation

These teams can help deal with the complexity of a “meta-technology” environment that includes many different parts, and can develop a single methodology for planning, implementation and introduction. Cross-functional teams can also be invaluable when it comes to communicating the benefits across the organization, as well as to customers, partners and suppliers.

Seeing Benefits

Once a UC program is under way, reaping the benefits is ultimately up to the users. An enterprise can make all the right decisions and deliver on a well-thought-out strategy and still not benefit from UC. Employees must be willing to make changes in the way they conduct business and communicate. UC can increase the efficiency of virtual teams, while reducing travel time and expenses, and can also eliminate some communication barriers, reduce cycle times and improve the quality of day-to-day communication. UC can support the re-engineering of business processes and accelerate process improvement, but only if process owners are willing to evolve. If not addressed, user resistance to change can be a deal-breaker for an otherwise well-planned UC program.

Despite the great promise of UC, it remains a challenging prospect. Standards are still emerging and different vendors offer different approaches. Independent advice can help companies select the strategies, architectures and deployment plans that make sense for them.

(Reference: AT&T)

Friday, January 21, 2011

THINGS YOU SHOULD KNOW ABOUT - DNSSEC

Scenario

When Laura returns to campus after the holiday break, she is shocked to hear that she has been de-registered from classes due to nonpayment of tuition. She calls her parents, who confirm that they paid her bill online in early December. They tell her that when they went to the bursar’s website, the page looked a bit different and asked for information they had previously entered, but the browser displayed the padlock icon indicating a secure connection, so they paid the bill as usual. They assure her that the funds have already been transferred from their bank account. Laura heads over to the bursar’s office, only to find a crowd of students in the same boat. As they talk about their predicament, they discover that they all paid their tuition online and that they all use the same regional ISP.

Further investigation by the university’s IT staff confirms that the students fell victim to DNS cache poisoning—a kind of computer attack in which hackers insert bad data into an ISP’s name server cache, which, as a result, directs Internet traffic from an intended site (in this case, the bursar’s website) to another location. The hackers even purchased an SSL certificate so that the bogus site would have the padlock icon.

The university has to let several hundred students re-register without having yet paid tuition, and the students and their families spend months getting their banks to refund the money that was fraudulently transferred from their accounts.

In the future, as administrators of domains and websites implement DNSSEC, such attacks will be prevented. DNSSEC adds a set of security provisions to the way Internet traffic is routed through name servers, protecting users from the kind of attack Laura suffered. When DNSSEC is implemented, if a user’s computer is redirected to a bogus version of a website, software that manages web traffic will encounter security keys that should match but do not, indicating a problem. In this way, DNSSEC will plug a fundamental weakness of the Internet.

What is it?

Internet-connected devices are identified by IP addresses, though users typically only know web addresses—people can remember “example.edu,” for instance, more easily than “192.168.7.13.” The Domain Name System (DNS) uses a distributed network of name servers to translate text-based web addresses into IP addresses, directing Internet traffic to proper servers. Though invisible to end users, DNS is a basic element of how the Internet functions.
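
To see this translation from a program’s point of view, here is a minimal sketch in Python that asks the system’s resolver for the address behind a hostname (the name and the address in the comment simply mirror the illustration above):

    # Ask the operating system's DNS resolver to translate a hostname
    # into an IP address, the lookup DNS performs behind every web request.
    import socket

    print(socket.gethostbyname("example.edu"))  # e.g. an address like 192.168.7.13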

DNS was built without security, however, leaving Internet traffic exposed to forged DNS data, which, among other things, allows the spoofing of addresses to redirect traffic to malicious websites. DNS Security Extensions (DNSSEC) adds security provisions to DNS so that computers can verify that they have been directed to proper servers. DNSSEC authenticates lookups of DNS data (including the mapping of website names to IP addresses) for DNSSEC-enabled domains so that outgoing Internet traffic (including e-mail) is always sent to the correct servers, without the risk of being misdirected to fraudulent sites.
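
Whether a given lookup was actually protected can be observed from client code. The sketch below, using the third-party dnspython package, asks for an A record with the DNSSEC OK bit set and then checks the AD (“authenticated data”) flag, which a resolver sets only when DNSSEC validation succeeded; the domain name is illustrative, and the check assumes the configured resolver performs validation:

    # Query with the DNSSEC OK (DO) bit and inspect the AD flag in the reply.
    import dns.flags
    import dns.resolver

    resolver = dns.resolver.Resolver()         # uses the system's configured resolver
    resolver.use_edns(0, dns.flags.DO, 1232)   # request DNSSEC processing

    answer = resolver.resolve("example.edu", "A")
    authenticated = bool(answer.response.flags & dns.flags.AD)
    print("DNSSEC-validated answer:", authenticated)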

Who’s doing it?

VeriSign administers the “root,” which supports all top-level domains (TLDs) (.com, .net, .info, and so forth), and is expected to implement DNSSEC for the root (“sign the root”) in 2010. Once that happens, DNSSEC traffic can be validated at its highest level—the root. Several nations—including Sweden (.se domain), Brazil (.br), Bulgaria (.bg), and the Czech Republic (.cz)—have implemented the technology for their country-code domains, and the Public Interest Registry has enabled DNSSEC validation for the .org domain. As part of its compliance with the Federal Information Security Management Act of 2002, which requires increased security for the nation’s cyberinfrastructure, the U.S. federal government has implemented DNSSEC for the .gov domain. Until the root is signed, these domains will use a surrogate authority to validate their DNSSEC-enabled web traffic, but all TLDs will eventually use DNSSEC. EDUCAUSE is working with VeriSign to implement DNSSEC for the .edu domain, also in 2010, and this effort is expected to provide guidance about best practices to smooth the transitions of the much-larger .com and .net domains in 2010 and 2011.

How does it work?

As data packets travel over the Internet, DNS provides the “maps” that correlate web addresses with IP addresses and route traffic to proper destinations. Because DNS does not provide a mechanism to authenticate the data in name servers, forged or corrupt data in a name server can direct traffic to the wrong server—a weakness that malicious parties use to their advantage. DNSSEC adds digital signatures that ensure the accuracy of lookup data, guaranteeing that computers can connect to legitimate servers.

With DNSSEC, a series of encryption keys is handed off and authenticated—the second-level domain (SLD) key (from example.edu) is authenticated by the TLD (.edu), and the TLD key is authenticated by the root. In this way, when an SLD, its parent TLD, and the root are all signed, a chain of trust is created. (Holders of SLDs can implement DNSSEC before their TLD or the root is signed, creating so-called “islands of trust” that rely on intermediate measures to validate their encryption keys.) If the encryption keys don’t match, DNSSEC will fail, but because the system is backwards-compatible, the transaction will simply follow standard DNS protocols.
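
To make one link of that chain concrete, the following sketch (again using the third-party dnspython package; the domain is illustrative) fetches a zone’s DNSKEY records and verifies the signature published over them with the zone’s own keys. A full validator would go further, checking the DS record in the parent zone and walking the chain up to the root:

    # Verify one link of the DNSSEC chain of trust: the zone's DNSKEY RRset
    # must validate against the RRSIG published alongside it.
    import dns.dnssec
    import dns.flags
    import dns.name
    import dns.rdataclass
    import dns.rdatatype
    import dns.resolver

    domain = dns.name.from_text("example.edu.")

    resolver = dns.resolver.Resolver()
    resolver.use_edns(0, dns.flags.DO, 1232)   # ask the resolver to include RRSIGs

    answer = resolver.resolve(domain, dns.rdatatype.DNSKEY)
    dnskey_rrset = answer.rrset
    rrsig_rrset = answer.response.find_rrset(
        answer.response.answer, domain,
        dns.rdataclass.IN, dns.rdatatype.RRSIG, dns.rdatatype.DNSKEY)

    try:
        dns.dnssec.validate(dnskey_rrset, rrsig_rrset, {domain: dnskey_rrset})
        print("DNSKEY RRset verified: keys and signatures match")
    except dns.dnssec.ValidationFailure:
        print("validation failed: keys and signatures do not match")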

The value of the system will come when the root, the TLDs, and SLDs are signed, allowing DNSSEC to be used for all Internet traffic. At that point, when DNSSEC fails, users will not be routed to bogus servers, and they might also be notified that nonmatching DNSSEC keys prevented their transaction from going through.

Why is it significant?

Hackers continue to exploit the security weakness of DNS to their advantage. By caching address information, name servers don’t have to look up the IP address every time a frequently visited site is accessed, and this speeds up the experience for end users. If hackers are able to insert a bogus IP address into a cache, however, all users of that name server will be directed to the wrong site (until the cache expires and is refreshed). Corrupting the operation of DNS in this way can lead to many kinds of fraud and other malicious activity. By plugging some of the largest security holes in the Internet, DNSSEC has the potential to significantly expand the trustworthiness—and thus the usefulness—of the Internet as a whole.

What are the downsides?

Fully implementing DNSSEC will require an enormous amount of work across every quarter of the Internet—signing the root and the TLDs is simply the tip of the iceberg. Participation is voluntary at this time, and the benefit that DNSSEC ultimately provides will be a reflection of the willingness of domain holders to do that work—that is, the value of DNSSEC will be in direct proportion to the number of sites that implement it. Even after the root and the TLDs are signed, the advantage of DNSSEC will be qualified by uneven rates of adoption. Adding encryption keys to Internet lookups introduces complex logistical problems of managing those keys, such as how to periodically update keys without breaking the way name servers (and their caches) work, and how to accommodate the differing keys and protocols of different TLDs. Name server software is still evolving to support DNSSEC; many organizations will need to update their DNS software, and, in some cases, hardware upgrades will also be required. In addition, DNSSEC might degrade the speed of Internet lookups, resulting in a slower experience for end users.

On top of the technical and resource-based challenges are policy issues that will need to be resolved at an international level. The effort to implement DNSSEC for the root has renewed a longstanding debate about where “control of the Internet” resides.

Where is it going?

Having the root and TLDs signed will provide some incentive for domain holders to implement DNSSEC because the chain of trust can be established, but until a critical mass of domains incorporate the technology, the benefits might not seem to justify the effort. Administrators of most TLDs are expected to develop resources to help ease the implementation of DNSSEC for domain holders, but many of the thorniest technical issues—about not only the transition to but also the maintenance of DNSSEC in practice—still need to be sorted out. Presumably, as domains begin implementing DNSSEC in large numbers, momentum will grow and sustain the transition, but it remains to be seen how long the process might take or at what point a mandate to implement DNSSEC will be required for full adoption.

What are the implications for higher education?

The risks posed by DNS and the benefits of implementing DNSSEC have special significance for higher education. Colleges and universities are expected to be “good Internet citizens” and to lead by example in efforts to improve the public good. Because users tend to trust certain domains, including the .edu domain, more than others, expectations for the reliability of college and university websites are high. To the extent that institutions of higher education depend on their reputations, DNSSEC is an avenue to avoid some of the kinds of incidents that can damage a university’s stature.

In more tangible terms, higher education institutions store enormous amounts of sensitive information (including personal and financial information for students and others, medical information, and research data), and they maintain valuable online assets to which access must be effectively restricted. DNS attacks result in stolen passwords, disrupted e-mail (which often is the channel for official communications), exposure to malware, and other problems. DNSSEC can be an important part of a broad-based cybersecurity strategy.

(Reference: http://www.educause.edu)

Friday, January 14, 2011

Global IP Addresses




Have you ever wondered who controls the allocation of IP space? Globally routable IP addresses are assigned and distributed by Regional Internet Registries (RIRs) to ISPs. Each ISP then allocates smaller IP blocks to its clients as required.

Virtually all Internet users obtain their IP addresses from an ISP. The roughly 4 billion available IPv4 addresses are administered by the Internet Assigned Numbers Authority (IANA, http://www.iana.org). IANA has divided this space into large subnets, usually /8 subnets with about 16 million addresses each. These subnets are delegated to one of the five regional Internet registries (RIRs), which are given authority over large geographic areas.
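
The arithmetic of this delegation is easy to see with Python’s standard ipaddress module; the prefix below is made up purely for illustration:

    import ipaddress

    # A hypothetical IANA-scale /8 block.
    block = ipaddress.ip_network("198.0.0.0/8")
    print(block.num_addresses)                 # 16777216, about 16 million addresses

    # An RIR might delegate a /16 to an ISP, which hands a /24 to a client.
    isp_allocation = next(block.subnets(new_prefix=16))
    client_allocation = next(isp_allocation.subnets(new_prefix=24))
    print(isp_allocation, client_allocation)   # 198.0.0.0/16 198.0.0.0/24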

The five RIRs are:

• African Network Information Centre (AfriNIC, http://www.afrinic.net/)
• Asia Pacific Network Information Centre (APNIC, http://www.apnic.net/)
• American Registry for Internet Numbers (ARIN, http://www.arin.net/)
• Regional Latin-American and Caribbean IP Address Registry (LACNIC, http://www.lacnic.net/)
• Réseaux IP Européens (RIPE NCC, http://www.ripe.net/)

Your ISP will assign globally routable IP address space to you from the pool allocated to it by your RIR. The registry system ensures that IP addresses are not reused in any part of the network anywhere in the world. Once IP address assignments have been agreed upon, it is possible to pass packets between networks and participate in the global Internet. The process of moving packets between networks is called routing.
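
As a rough illustration of what routing means in practice, the sketch below implements the longest-prefix-match rule a router applies when choosing where to forward a packet; the table entries are invented for the example:

    import ipaddress

    # A toy routing table mapping prefixes to next hops.
    routes = {
        ipaddress.ip_network("0.0.0.0/0"): "upstream ISP",    # default route
        ipaddress.ip_network("198.0.0.0/16"): "peering link",
        ipaddress.ip_network("198.0.5.0/24"): "local LAN",
    }

    def next_hop(destination: str) -> str:
        # The most specific (longest) matching prefix wins.
        addr = ipaddress.ip_address(destination)
        matching = [net for net in routes if addr in net]
        return routes[max(matching, key=lambda net: net.prefixlen)]

    print(next_hop("198.0.5.20"))   # local LAN
    print(next_hop("198.0.77.1"))   # peering link
    print(next_hop("8.8.8.8"))      # upstream ISP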

(Reference: http://wndw.net)

Monday, January 10, 2011

Global Information Assurance Certification (GIAC) - The Only Hands-On Information Security Certification

In the information security industry, there is a multitude of certifications, but only GIAC (Global Information Assurance Certification) builds true hands-on skills that go beyond theory and tests candidates on the pragmatics of security administration, management, audit, and software security.

GIAC offers more than 20 specialized information security certifications that correspond to specific job duties. The family of GIAC certifications targets job-based skill sets rather than taking a one-size-fits-all approach. The GIAC certification process validates the specific skills of security professionals and developers, with standards based on the highest benchmarks in the industry.

(Reference: http://www.giac.org)