Confronting Complexity in a Cost-Sensitive World

One of the most enjoyable parts of my job is traveling around the world and talking to CIOs about the many pressing challenges of managing today's heterogeneous IT infrastructure. It's clear to me that in today's difficult economy, it's not that CIOs aren't spending money; they're simply spending the money they have more wisely.

These executives are willing to open their checkbooks - even when their budgets may be 10 to 30% tighter - for anything that has a measurable return on investment. Of course, the great price/performance ratio of the Intel Xeon and Itanium processors can be a big part of that focus on ROI and use of industry standards.

Weighing the Scales: Up and Out
A major concern for CIOs today is how to scale their software and hardware infrastructure to improve responsiveness across the lines of business - like marketing, finance, and communications departments - without requiring a "forklift upgrade." When it comes to scalability, the choice is between scaling up and scaling out - a decision in which software developers can play a significant role.

Large-scale systems offer impressive processing power. Massive 32-way and 64-way symmetric multiprocessing (SMP) machines powered by Intel silicon will become increasingly common, and some manufacturers have plans to scale Intel-based systems to hundreds of processors in non-uniform memory architecture (NUMA) configurations.

These systems will deliver increased levels of transaction performance because of the sheer power of the hardware. A less obvious, but still important, contributor to delivering these staggering performance results is the software that partitions the workload so that each processor and server effectively performs its part of the job.
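To make that partitioning idea concrete, here is a minimal Java sketch of scale-up workload division: the job is split into one chunk per available processor and handed to a fixed thread pool so every processor has its part of the job. The class and task names are illustrative, not taken from any particular product.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/**
 * Sketch of scale-up workload partitioning: split a job into as many
 * chunks as there are processors and let a fixed thread pool keep
 * every processor busy.
 */
public class ScaleUpPartition {
    public static void main(String[] args) throws Exception {
        int processors = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(processors);

        int totalItems = 1_000_000;
        int chunkSize = totalItems / processors;
        List<Future<Long>> partials = new ArrayList<>();

        // One chunk per processor: each task sums its own slice.
        for (int p = 0; p < processors; p++) {
            final int start = p * chunkSize;
            final int end = (p == processors - 1) ? totalItems : start + chunkSize;
            partials.add(pool.submit(() -> {
                long sum = 0;
                for (int i = start; i < end; i++) sum += i;
                return sum;
            }));
        }

        long total = 0;
        for (Future<Long> f : partials) total += f.get(); // combine partial results
        pool.shutdown();
        System.out.println("processors=" + processors + " total=" + total);
    }
}
```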

When an organization upgrades to newer servers, it requires active intelligence on the part of the IT infrastructure to share the additional resources. Adding servers or storage to accommodate more engineers requires software that monitors transactions and communications to get more out of the existing wired and wireless communications network infrastructure.

In addition to looking for help to scale up to accommodate growing databases or distributed applications, CIOs will probably ask software professionals to consider the financial impact of migrating to a new platform, including the licensing or maintenance fees.

Scale out is a different animal. Rather than an "every so often" overhaul, it's a step-and-repeat process: systems are clustered together, each one running its own instance of an operating system and providing a compartmentalized service.

Web access and e-mail are great examples that illustrate the need for software that can increase service availability when servers are added to boost processing power.

Many of these types of high-performance clusters, with hundreds or thousands of nodes, have been interconnected in a scale-out fashion. The workload is partitioned such that it runs across all the nodes, no matter how geographically dispersed they are.
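As a rough illustration of that partitioning, the following Java sketch spreads incoming requests round-robin across a fixed list of cluster nodes, so adding a node to the list adds capacity. The node addresses are hypothetical; a real cluster would discover its members dynamically.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Sketch of scale-out request distribution: a fixed list of node
 * addresses, and a dispatcher that spreads requests across them
 * round-robin.
 */
public class ScaleOutDispatcher {
    private final List<String> nodes;
    private final AtomicLong counter = new AtomicLong();

    public ScaleOutDispatcher(List<String> nodes) {
        this.nodes = nodes;
    }

    /** Pick the next node in rotation. */
    public String nextNode() {
        int i = (int) (counter.getAndIncrement() % nodes.size());
        return nodes.get(i);
    }

    public static void main(String[] args) {
        // Hypothetical cluster members.
        ScaleOutDispatcher lb = new ScaleOutDispatcher(
            List.of("web-01:7001", "web-02:7001", "web-03:7001"));
        for (int r = 0; r < 6; r++)
            System.out.println("request " + r + " -> " + lb.nextNode());
    }
}
```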

Confronting Complexity
CIOs are also concerned about finding automated methods of managing complexity and communicating with disparate platforms. Things were much easier when monolithic apps ran on big equipment and were monitored by guys who were within shouting distance if something went wrong. But now, the skateboard-riding IT guys may not even be in the same building, let alone on the same floor, so they've got to be able to diagnose and manage services remotely.

The intercommunication standards that work as part of RAS (remote access services) frequently enable this, which is great progress. Software that can provide automated communication across servers, laptops, and wireless devices drives reliability, availability, and serviceability, leading not only to operating efficiencies but also to improved ROI.
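As a simple sketch of what such automated remote serviceability can look like, the following Java snippet probes a list of hypothetical hosts on a service port and flags any that stop answering. Real remote-management tooling is of course far richer than a port check.

```java
import java.net.InetSocketAddress;
import java.net.Socket;

/**
 * Sketch of automated remote monitoring: probe each managed host's
 * service port on a schedule and flag the ones that stop answering.
 * Host names and the port are hypothetical placeholders.
 */
public class HeartbeatMonitor {
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        String[] hosts = { "app-01.example.com", "app-02.example.com" };
        while (true) {
            for (String h : hosts) {
                boolean up = isReachable(h, 7001, 2000);
                System.out.println(h + (up ? " OK" : " UNREACHABLE - notify on-call"));
            }
            Thread.sleep(60_000); // probe once a minute
        }
    }
}
```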

Introducing Modularity
CIOs also talk about the difficulties in managing heterogeneous computing infrastructures that mix wired and wireless communications, and include a variety of hardware form factors and multiple operating environments. Employees are now distributed across a variety of devices and geographies, and the data and communications infrastructures are likewise spread out over distances.

The hardware components of these emerging modular environments can include rackable servers and blade servers. Blades are essentially circuit cards that contain processors and memory in a small form factor. These computing elements are bound together with a high-speed interconnect, such as InfiniBand Technology or Gigabit Ethernet.

The industry needs to deliver software and hardware that can easily administer and reallocate these communications and storage resources as flexible nodes.

This will require the different abstraction layers - from application servers to operating systems to Web services - to hide the complexity of a distributed computing hierarchy.

One formidable challenge will be to develop solutions that virtualize data through logical partitioning across the entire network. This flexible capacity is required to make efficient use of available resources, which increases ROI by centralizing management.
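One simple form of such logical partitioning is hash-based placement: every key maps deterministically to one storage node, so the data is spread across the network yet addressed through a single logical namespace. The Java sketch below assumes hypothetical node names and ignores rebalancing.

```java
/**
 * Sketch of logical data partitioning: a key is hashed to one of N
 * storage nodes, so data is physically distributed but logically
 * addressed by key alone. Node names are hypothetical.
 */
public class LogicalPartitioner {
    private final String[] storageNodes;

    public LogicalPartitioner(String... storageNodes) {
        this.storageNodes = storageNodes;
    }

    /** Every caller applies the same rule, so any node can locate any key. */
    public String nodeFor(String key) {
        int bucket = Math.floorMod(key.hashCode(), storageNodes.length);
        return storageNodes[bucket];
    }

    public static void main(String[] args) {
        LogicalPartitioner p =
            new LogicalPartitioner("db-a", "db-b", "db-c", "db-d");
        for (String key : new String[] { "order:1001", "order:1002", "cust:77" })
            System.out.println(key + " -> " + p.nodeFor(key));
    }
}
```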

In addition to allowing databases to be maintained by just a few individuals, every company needs software that distributes and manages the data with a high degree of intelligence. I'm talking about software that is self-optimizing, self-healing, and capable of automatic recovery. In the near future, I expect this software to do things that haven't even been dreamed of yet.
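A minimal sketch of the automatic-recovery part, assuming a hypothetical list of database replicas: try the primary, and on failure fail over to the remaining replicas before giving up.

```java
import java.util.List;
import java.util.function.Function;

/**
 * Sketch of automatic recovery: try the primary replica, and on
 * failure retry against the remaining replicas before giving up.
 * The replica names and query function are illustrative stand-ins
 * for a real data-access layer.
 */
public class SelfHealingQuery {
    static <T> T queryWithFailover(List<String> replicas,
                                   Function<String, T> query) {
        RuntimeException last = null;
        for (String replica : replicas) {
            try {
                return query.apply(replica); // first healthy replica wins
            } catch (RuntimeException e) {
                last = e;                    // record the failure, try the next
                System.err.println(replica + " failed: " + e.getMessage());
            }
        }
        throw new IllegalStateException("all replicas failed", last);
    }

    public static void main(String[] args) {
        List<String> replicas = List.of("db-primary", "db-replica-1", "db-replica-2");
        String result = queryWithFailover(replicas, node -> {
            if (node.equals("db-primary"))
                throw new RuntimeException("connection refused"); // simulated outage
            return "rows from " + node;
        });
        System.out.println(result);
    }
}
```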

Conquering Complexity
The collision of the diversified platforms with the increasing distribution of resources has produced unprecedented levels of complexity. In fact, "complexity" is now almost a synonym for IT infrastructure.

I'm confident that developers will seize the chance to create comprehensive software that manages incredibly intricate environments while still retaining the simplicity to pacify those demanding users who won't leave their office until their e-mail account is working. Similarly, CEOs are still seeking a reliable way to synch their handhelds with the corporate databases - another opportunity for software developers.

Intel is dedicated to providing leadership in our silicon engineering as well as the overall Intel architecture. I look forward to watching the innovative things that developers will do with Intel's Hyper-Threading Technology, which is now available on desktop systems.

Intel will also continue to extend the Itanium processor family in 2003 and beyond. In addition, Intel continues to advance the Intel Xeon processor family for dual-processing and multi-processing systems.

Intel remains focused on working with the development community to devise solutions that bring the IA ecosystem's benefits of innovation and value to the constantly evolving enterprise.

SIDEBAR
Today's data center comprises a variety of applications, from front-end services to very large back-end databases. Each type of application taxes its server or servers differently, requiring data center administrators and architects to implement different server solutions for optimal performance, scalability, and availability.

Intel Xeon processor MP-based servers are ideal for mid-tier and back-end solutions such as BEA WebLogic, where significant processing power is needed. See www.intel.com/ebusiness/pdf/prod/server/xeon_mp/wp022401.pdf for how the Intel Xeon processor MP satisfies the demands of these high-end applications.

This enterprise infrastructure guide www.intel.com/ebusiness/pdf/affiliates/wp024202.pdf details the business and technical benefits of deploying BEA WebLogic Server on Intel processor-based servers. It also offers design recommendations for a flexible, scalable three-tier WebLogic-based architecture, as well as best practices for migrating to a WebLogic/Intel platform. Proof points include:

  • Intel Solution Services conducted tests running WebLogic Server applications on clustered, 4-way, Intel processor-based platforms. The results show fully linear scalability from 4 to 20 processors, with balanced workloads across all nodes.
  • Testing WebLogic Server's scalability on the Intel Xeon processor MP showed that Intel's Hyper-Threading Technology (www.intel.com/ebusiness/products/server/benefits/ht/index.htm) can significantly boost performance for complex transactions.
  • The WebLogic JRockit Java Virtual Machine has been optimized for the Intel Xeon processor family to boost performance in multi-threaded applications. BEA has also released a version optimized for the Intel Itanium architecture for the most demanding, high-end enterprise applications. (www.bea.com/framework.jsp?CNT=pr00984.htm&FP=/content/news_events/press_releases/2003)

    BEA WebLogic Server offers a highly flexible and scalable infrastructure solution. It also provides exceptional support for n-tier architecture, and integrates easily with legacy applications and platforms to deliver exceptional agility. For additional BEA resources on Intel.com, see www.intel.com/ids/bea.

About the Author

Mike Fister is Senior Vice President and General Manager of the Enterprise Platforms Group, which designs, markets, and supports building blocks for enterprise computing. Products delivered by the group are used in server and workstation platforms and include IA-32 and Itanium architecture processors, chipsets, boards/systems, and software tools and services.

Prior to his current role, Fister managed Intel's IA-32 processor development organization, where he was responsible for the design, development, and marketing of IA-32 processors, including Pentium® Pro, Pentium II, Pentium III, Celeron®, Pentium II Xeon™, and Pentium III Xeon processors.

Fister joined Intel in 1987 as the Chandler, Ariz. operations manager for the 8-bit focus group. In 1988, he was promoted to engineering manager for the application-specific integrated circuit group. He became the manager for the Arizona microcomputer engineering group in 1990. In 1991, he was promoted to general manager of the End User Components Division. Fister was appointed an Intel Vice President in 1996 and elected a corporate Vice President in 2000. He was promoted to Senior Vice President in 2002. Prior to joining Intel, Fister held executive and engineering management positions at Wyse, Machine Vision International, and Cincinnati Milacron.

Fister received his bachelor's and master's degrees in Electrical Engineering from the University of Cincinnati in 1977 and 1978, respectively.
