Building AI Applications: Insights Into Generative AI Architecture

This is the second blog in my series following “Requirements for Building an Enterprise Generative AI Strategy,” where I highlighted the significant challenges and expectations enterprise customers have for generative AI, with detailed requirements for building a strategy. My recommendations centered on grounding responses in enterprise knowledge, integrating references for trust and verifiability, ensuring answers respect user access controls, and maintaining model flexibility.

In this blog, I introduce a reference AI model architecture designed specifically for generative AI applications, demonstrate how this architecture addresses enterprise generative AI challenges around trust, data privacy, security, and large language model (LLM) agility, and provide a brief overview of LLM operations (LLMOps). As a refresher, BMC HelixGPT is our approach to generative AI integrated across the BMC Helix for ServiceOps platform.

Generative AI reference architecture

An application architecture describes the behavior of applications used in a business, focusing on how they interact with each other and with users. It concerns the data consumed and produced by applications rather than their internal structure. The industry has long recognized three prominent AI design patterns for building generative AI applications:

  • Prompt engineering
  • Retrieval augmented generation (RAG)
  • Fine tuning pipelines

Instead of debating which approach is better, BMC HelixGPT seamlessly integrates all three.

The generative AI architecture diagram below shows our BMC HelixGPT application reference architecture for generative AI. The architecture consists of several layers: API plug-ins, a prompt library, vector data source ingestion, access control processing, a model-training pipeline, an assessment layer for hallucinations, telemetry, and evaluations, a “bring your own model” embedding layer, and an LLM orchestration layer. BMC HelixGPT extensively uses LangChain as the engine to orchestrate and trigger LLM chains.

Reference Architecture for GenAI

BMC HelixGPT’s proprietary generative AI technology, combined with LangChain’s open-source model integrations, provides “bring your own model” flexibility for our customers. There are also retrieval plug-ins, access control plug-ins, and API plug-ins that integrate into enterprise systems. Like the holistic design explained in this June 2023 blog by Andreessen Horowitz, we have three main flows:

  1. Data ingestion and training flow: Data is read from multiple data stores, preprocessed, chunked, and either embedded through an embedding model (RAG) or run through a training pipeline (fine-tuning). VectorDB stores the chunked document embeddings, enabling semantic, similarity-based data retrieval. (A minimal sketch of this ingestion flow follows the list.)
  2. Prompt augmentation using data retrieval: Once a user query arrives at the API layer, the prompt is selected, followed by data retrievals through VectorDB or API plug-ins to gather the right contextual data before the prompt is passed to the LLM layer.
  3. LLM inference: Here there is a choice between general-purpose foundation models from OpenAI, Azure GPT models, or the self-hosted foundation model in BMC HelixGPT. Fine-tuned models are used when tuned for a specific task or use case. The response is evaluated for accuracy and other metrics, including hallucinations.
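Below is a minimal sketch of the ingestion flow, written against classic LangChain APIs (module paths vary across LangChain versions). The loader, splitter settings, embedding model, and FAISS vector store are illustrative assumptions, not BMC HelixGPT’s actual components:

```python
# Sketch of flow 1: read, preprocess, chunk, embed, and store documents.
# Assumes `pip install langchain openai faiss-cpu beautifulsoup4` and an
# OPENAI_API_KEY in the environment.
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# 1. Read documents from a data source (here, a single web page).
docs = WebBaseLoader("https://example.com/kb/article").load()

# 2. Chunk documents so each embedding covers a coherent span of text.
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# 3. Embed the chunks into a vector store for similarity-based retrieval.
vectordb = FAISS.from_documents(chunks, OpenAIEmbeddings())
retriever = vectordb.as_retriever(search_kwargs={"k": 4})
```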

Now, let us look at how this reference architecture addresses the challenges of generative AI for enterprises and facilitates the rapid development of generative AI applications.

Overcoming common enterprise challenges with generative AI deployments

While the promise of generative AI systems is considerable, businesses face real challenges in deploying them. Some challenges of generative AI architecture are technical and operational, while others are ethical and societal, with a strong element of emotion thrown in.

Enterprise versus world knowledge: accuracy and trust

Enterprises seek answers across diverse internal and external data sources such as articles, web pages, how-to guides, and more. Further, data can be contained in both structured and unstructured databases. BMC HelixGPT ingests, chunks, and embeds these sources through LangChain data loaders using embedding transformer-based models. LangChain provides a rich set of document loaders that BMC HelixGPT leverages. When a user question is received, we augment the prompts with document retrievals from VectorDB or APIs and use the LLM’s in-context learning to generate a response. This method anchors the LLM’s response to the retrieved documents or data, reducing the risk of hallucinations. BMC HelixGPT also provides the retrieved documents as citations, allowing users to verify the responses. To realize this advanced capability, our strategy integrates various LangChain capabilities, such as retrieval QA chains with sources and conversation history chains.
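As an illustration of a retrieval QA chain with sources, here is a hedged sketch using LangChain’s standard RetrievalQAWithSourcesChain. The model choice and the `retriever` variable from the ingestion sketch above are assumptions; the actual BMC HelixGPT chains are not shown here:

```python
# Answer a question grounded in retrieved documents, returning citations.
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQAWithSourcesChain

chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-4", temperature=0),
    chain_type="stuff",    # stuff the retrieved chunks into the prompt
    retriever=retriever,   # vector store retriever built during ingestion
)

result = chain({"question": "How do I reset my VPN token?"})
print(result["answer"])    # response anchored to the retrieved documents
print(result["sources"])   # citations the user can verify
```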

Access control, security, and data privacy

During the document retrieval flow, BMC HelixGPT validates that the user has permission to read the documents and removes from the prompt context any documents the user doesn’t have access to. This ensures that LLM-generated answers come only from documents a user has read access to. Hence, the same question can generate two different answers, each aligned to the user’s role and permission model.
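In simplified form, permission-aware retrieval looks like the sketch below. The `user_can_read` callback is a hypothetical stand-in for whatever access-control service the platform integrates with:

```python
# Drop documents the user cannot read before the prompt is assembled,
# so the LLM never sees them.
def filter_by_acl(user, documents, user_can_read):
    """Keep only the documents the user is entitled to read."""
    return [doc for doc in documents if user_can_read(user, doc)]

def build_context(user, retriever, question, user_can_read):
    candidates = retriever.get_relevant_documents(question)
    allowed = filter_by_acl(user, candidates, user_can_read)
    # Two users asking the same question can get different contexts,
    # and therefore different answers, depending on their permissions.
    return "\n\n".join(doc.page_content for doc in allowed)
```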

Model flexibility and security

The BMC HelixGPT reference architecture is based on a model abstraction layer that LangChain provides. This capability enables seamless integration of foundational general-purpose models, whether hosted behind APIs such as OpenAI and Azure or open-source models running in customers’ data centers. LangChain offers over 50 connectors to different model providers, making it easy to add new providers or models modularly. Customers who prioritize data security have the option to host and run a foundational model in their own data center. This model architecture caters to diverse enterprise customers and prevents vendor lock-in, including implementations that provide the strongest privacy and security guarantees.
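The practical effect of the abstraction layer is that the provider becomes a configuration choice. A hedged sketch using classic LangChain module paths (the provider names, deployment name, and internal endpoint are illustrative assumptions):

```python
# Swap the model provider behind one variable; downstream chains depend
# only on LangChain's shared model interface, not on any one provider.
from langchain.chat_models import ChatOpenAI, AzureChatOpenAI

PROVIDER = "openai"  # could come from per-tenant configuration

if PROVIDER == "openai":
    llm = ChatOpenAI(model_name="gpt-4")
elif PROVIDER == "azure":
    # plus the usual Azure endpoint/key configuration via environment
    llm = AzureChatOpenAI(deployment_name="my-gpt4-deployment")
else:
    # e.g., an open-source model served in the customer's own data center
    from langchain.llms import HuggingFaceTextGenInference
    llm = HuggingFaceTextGenInference(
        inference_server_url="http://llm.internal:8080"  # hypothetical URL
    )
```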

An Introduction to LLMOps

Machine learning operations (MLOps) applied to LLMs is called LLMOps. LLMOps is a new set of tools and best practices to manage the lifecycle of LLM-powered applications, including data management, model management, and LLM training, monitoring, and governance. LLMOps is the driving force behind building generative AI applications for BMC HelixGPT.

BMC HelixGPT is a platform that provides models and services that allow applications to harness the power of generative AI. It also provides LLMOps foundational services such as prompt engineering and RAG to power a spectrum of use cases ranging from summarization to content generation and conversations.

LLMOps is distinct from MLOps because it introduces three new paradigms for training LLMs:

  • Prompt engineering
  • Retrieval augmented generation
  • Fine-tuning pipelines

My third and final installment in this blog series will dive deeper into BMC HelixGPT’s LLMOps capabilities.

How to select a foundation AI model architecture

Selecting a foundation AI model architecture is a process of thinking through various factors and coming up with answers specific to your organization and situation. The list below covers some of the most important aspects of this decision.

1. Objectives and project needs

Consider the model’s ability to achieve the goals of your initiative and to handle the various tasks involved. BMC HelixGPT applies generative AI to service operations intelligence.

2. Cost considerations

How will the model fit your budget? Assess your options beyond the initial capital expense, to include maintenance and ongoing operational expenses. At BMC, we encourage you to calculate the ROI, not just the costs.

3. Privacy and cybersecurity

Security and compliance with laws and regulations around data privacy are vital. Investigate the capabilities of each model in handling confidential and sensitive information. BMC HelixGPT uses full encryption for data at rest and in motion, enforces role-based access, and is hosted in secure data centers around the world.

4. Ability to tailor to your needs

What tasks can be covered with off-the-shelf capabilities? Will you need customization, or the ability to modify and adjust the model? Choosing a configurable model like BMC HelixGPT allows you to tailor the model to your needs.

5. User support and community engagement

The strength of support you get from a vendor and an active, engaged community can add tremendous value. BMC HelixGPT, for example, includes full and accessible documentation, 24/7 world-class assistance, and educational content in the form of web-based training and self-service video tutorials. BMC also hosts an active community group where experts, enthusiasts, and users share ideas and help each other solve problems.

6. Ability to scale

As your business grows, the amount of data you collect grows, too, along with the complexity of the tasks your model must perform. A scalable model, like BMC HelixGPT, builds in extensibility with its cloud-native architecture, elastic resources, support of multi-tenancy, and the ability to integrate within your IT ecosystem with customizable workflows.

7. Compliance and ethical considerations

Keep in mind that you will need to make legal and ethical decisions, staying aware of potential biases in the model and your data. Beyond legal compliance requirements, you will want to consider ethical questions and build in safeguards that reflect your values.

8. Talent and outside resource availability

Ideally, you want to ensure you can find people with the skills to support your foundation AI model architecture. Talent is readily available for open-source models, but quality can be spotty. Closed-source, ready-made solutions generally come with high-quality experts, but from a more limited talent pool. Training and skill development will need to be a priority in either case.

9. Long-term viability

Assess the vendor’s investment in development and support for any given model. You don’t want to be stuck with an orphaned platform. Given the pace of innovation, working with a company like BMC, which is committed for the long term, protects the viability of your investment.

10. Integration with existing systems

Make the most of your investment in your current infrastructure and avoid disrupting current workflows by choosing a foundation AI model architecture that integrates with your operational environment, like BMC HelixGPT. Ideally, your generative AI platform will enhance what you do, without throwing everything into disarray.

What Is DBMS (Database Management System)?

Data is the cornerstone of any modern software application, and databases are the most common way to store and manage data used by applications.

With the explosion of web and cloud technologies, databases have evolved from traditional relational databases to more advanced types of databases such as NoSQL, columnar, key-value, hierarchical, and distributed databases. Each type has the ability to handle structured, semi-structured, and even unstructured data.

On top of that, databases are continuously handling mission-critical and sensitive data. When this is coupled with compliance requirements and the distributed nature of most data sets, managing databases has become highly complex. As a result, organizations require robust, secure, and user-friendly tools to maintain these databases.

This is where database management systems come into play—by offering a platform to manage databases. Let’s take a look.


What is a database management system (DBMS)?


A database management system (DBMS) is a software tool for creating, managing, and querying a database. With a DBMS, users can access and interact with the underlying data in the database. These actions can range from simply querying data to defining database schemas that fundamentally affect the structure of the database.

Furthermore, a DBMS allows users to interact with a database securely and concurrently, without interfering with each other and while maintaining data integrity.



What are the functions of DBMS?


The typical DBMS tasks or functions include:

  • User access and control. Administrators can easily configure user accounts, define access policies, modify restrictions and access scopes to limit access to underlying data, control user actions, and manage database users.
  • Data backups and snapshots. DBMS can simplify the database backup process through a simple, straightforward interface for managing backups and snapshots. For safekeeping, users can move these backups to third-party locations, such as cloud storage.
  • Performance tuning. DBMS can monitor database performance using integrated tools. Users can tune databases by creating optimized indexes to reduce I/O usage and optimize SQL queries for the best database performance.
  • Data recovery. DBMS provides a recovery platform and the necessary tools to fully or partially restore databases to their previous state—effortlessly.
  • Database query language and APIs. Access and use data via a variety of query languages and API connections.
  • Data dictionary management. Dictionaries include metadata about the structure of the data and relationships between data points so that functionality can rely on structural abstractions rather than complex coding.
  • Data transformation and display. DBMS transforms data on command, such as assembling attributes for the month, day and year as December 14, 2024, or 12/14/24 or another specified display format.
  • Management of data integrity. DBMS establishes and maintains data consistency and minimizes duplications.
  • Concurrent user access. DBMS permits more than one user to access the database at a time, following ACID properties (atomicity, consistency, isolation, durability) to accommodate multiple users without corrupting data.
  • User interface. Whether accessing data through a web form, a direct dashboard, or a third-party distributed network, a browser-based interface makes it easy.

All these administrative tasks are facilitated using a single management interface. Most modern DBMS support handling multiple database workloads from a centralized DBMS software, even in a distributed database scenario. Furthermore, they allow organizations to have a governable top-down view of all the data, users, groups, locations, etc., in an organized manner.

(Explore the role of DBAs, or database administrators.)

How does DBMS work?

The various DBMS components work together to create an integrated system for structuring and storing data, supporting user queries and access control, and ensuring consistency, integrity, security, backups, and logging.

The following DBMS schematic illustrates how a DBMS works:

Components of DBMS include the storage engine, database language and more.

What are the components of a DBMS?

All DBMSs come with various integrated components and tools necessary to carry out almost all database management tasks. Some DBMS software even provides the ability to extend beyond the core functionality by integrating with third-party tools and services, directly or via plugins.

In this section, we will look at the common components of a DBMS that are universal across all database software:

  1. Storage engine
  2. Database query language
  3. Query processor
  4. Optimization engine
  5. Metadata catalog
  6. Log manager
  7. Reporting and monitoring tools
  8. Data utilities


1. Storage engine in a database

The database storage engine is the core component of the DBMS that interacts with the file system at the OS level to store data. All SQL queries that interact with the underlying data go through the storage engine.

Which storage engine is the best for a database?

The right storage engine depends on your data model. SQL engines supporting transactions work well with relational databases. Non-relational models, especially those that require scalability, work best with the engines behind databases like MongoDB or Cassandra.

2. Database query language

What is a database access language? A database access language is required for interacting with a database, from creating databases to simply inserting or retrieving data. A proper DBMS must support one or multiple query languages and language dialects. Structured query language (SQL) and MongoDB Query Language (MQL) are two query languages that are used to interact with the databases.

What are the 4 types of DBMS languages?

In many query languages, functionality can be further categorized according to specific tasks (a runnable sketch follows this list):

  • Data Definition Language (DDL). This consists of commands that can be used to define database schemas or modify the structure of database objects.
  • Data Manipulation Language (DML). Commands that directly deal with the data in the database. All CRUD operations come under DML.
  • Data Control Language (DCL). This deals with the permissions and other access controls of the database.
  • Transaction Control Language (TCL). Commands that deal with internal database transactions.
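Here is a small runnable illustration of these sublanguages using Python’s built-in sqlite3 module. SQLite has no user accounts, so the DCL statement appears only as a comment; in a server database such as PostgreSQL it is ordinary SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# DDL: define the schema (the structure of database objects).
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT)")

# DML: manipulate the data itself -- the C and R of CRUD shown here.
conn.execute("INSERT INTO employees (name) VALUES (?)", ("Ada",))
print(conn.execute("SELECT id, name FROM employees").fetchall())

# TCL: sqlite3 wraps DML in an implicit transaction; commit() makes it
# durable, while rollback() would undo it.
conn.commit()

# DCL (unsupported by SQLite; shown for a server DBMS like PostgreSQL):
#   GRANT SELECT ON employees TO analyst;
conn.close()
```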

3. Query processor

The query processor is the intermediary between user queries and the database. In DBMS, query processing is the process of interpreting user queries, such as SQL, and making them actionable commands that the database can understand to perform the appropriate functionality.

What are the components of the query processor?

The query processor’s components work together to extract data (a toy sketch of the query cache follows this list).

  • Parser. This component takes a user query written in a database language such as SQL, parses it for correct syntax, and verifies its logical meaning.
  • Optimizer. This component converts the query into logical relational operations, identifies how much time and energy it will take to execute the query, and then specifies the exact operations and sequence for the most efficient execution.
  • Execution engine. This is the component that carries out the query, implements algorithms and operators according to the optimized plan, and finally retrieves and formats the results.
  • Query cache. Some systems include a component that stores frequently executed queries and results to save time and improve performance.
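The query cache is the easiest of these components to illustrate. Below is a toy sketch; a real DBMS cache would also invalidate entries when the underlying tables change, which is omitted here:

```python
import sqlite3

class CachingExecutor:
    """Serve repeated identical read queries from memory."""

    def __init__(self, conn):
        self.conn = conn
        self.cache = {}  # (sql, params) -> result rows

    def query(self, sql, params=()):
        key = (sql, tuple(params))
        if key not in self.cache:  # miss: parse, plan, and execute
            self.cache[key] = self.conn.execute(sql, params).fetchall()
        return self.cache[key]     # hit: skip execution entirely

db = CachingExecutor(sqlite3.connect(":memory:"))
db.query("SELECT 1 + 1")  # executed against the database once...
db.query("SELECT 1 + 1")  # ...then served from the cache
```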

4. Optimization engine in DBMS

The optimization engine allows the DBMS to provide insights into the performance of the database in terms of optimizing the database itself and queries. When coupled with database monitoring tools, it can provide a powerful toolset to gain the best performance out of the database.

5. Metadata catalog

A metadata catalog, also referred to as a data catalog, is the centralized catalog of all the objects within the database. When an object is created, the DBMS keeps a record of that object with some metadata about it in the metadata catalog (see the sketch below). This record can then be used to:

  • Verify user requests to the appropriate database objects
  • Provide an overview of the complete database structure
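SQLite makes the concept easy to see because it exposes its catalog as an ordinary queryable table, sqlite_master; server databases expose the same idea through system catalogs or information_schema. A minimal sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("CREATE INDEX idx_orders_total ON orders (total)")

# Every object created above now has a catalog record, including the DDL
# that defines it -- an overview of the complete database structure.
for obj_type, name, sql in conn.execute(
        "SELECT type, name, sql FROM sqlite_master"):
    print(obj_type, name, sql)
```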

6. Log manager

The log manager is the component that keeps all the logs of the DBMS. These logs consist of user logins and activity, database functions, backup and restore functions, etc. The log manager ensures all these logs are properly recorded and easily accessible.

(Compare logs to monitoring.)

7. Reporting & monitoring tools

Reporting and monitoring tools are another standard component that comes with a DBMS. Reporting tools enable users to generate reports, while monitoring tools track the databases for resource consumption, user activity, and more.

8. Data utilities

In addition to all the above, most DBMS software comes with additional inbuilt utilities to provide functionality such as:

  • Data integrity checks
  • Backup and restore
  • Simple database repair
  • Data validations


What are the different types of DBMS?

The evolution of data models, how data is structured, and the use cases of each have led to various types of DBMS. The most commonly used are:

1. Relational database management systems (RDBMS)

Relational database management systems are the most common type of DBMS. They manage databases that contain structured data in a table format with predefined relationships, and they use structured query language (SQL) to interact with those databases. Some popular examples of RDBMS include:

  • Microsoft SQL Server
  • MySQL
  • Oracle Database
  • MariaDB
  • PostgreSQL

2. NoSQL databases

NoSQL (nonrelational) databases are designed for semi-structured and unstructured data. They offer greater data modeling flexibility and often don’t use a schema. They also support scaling across distributed systems.

Examples of nonrelational or NoSQL databases include:

  • MongoDB
  • Apache Cassandra
  • Redis
  • Couchbase

3. Object-oriented DBMS (OODBMS)

This type of database stores data and data relationships as objects that can be used by object-oriented programming languages like C++ and Java in applications such as CAD systems, databases containing scientific research, and multimedia.

Examples of object-oriented databases include:

  • ObjectDB
  • Versant
  • GemStone/S
  • Objectivity/DB

4. Hierarchical DBMS

This type of database uses tree-like structures to organize data in parent-child relationships. A parent node can have many children and grandchildren, but each child node has only one parent. These DBMSs work well when data has well-defined relationships that can be organized into files and directories.

Examples of hierarchical databases include:

  • IBM Information Management System (IMS)
  • RDM Mobile
  • Windows Registry
  • XML data storage

5. Network DBMS

This type of database supports complex interconnections in many-to-many data relationships, with records that have multiple and complex links.

Examples of databases that use the network model include:

  • IDMS (Integrated Database Management System)
  • Oracle CODASYL

6. Columnar database management systems (CDBMS)

As the name suggests, a CDBMS is used to manage columnar databases, which store data in columns instead of rows, emphasizing high performance. Databases that use a columnar format include Apache Cassandra and Apache HBase.

What are the advantages of DBMS?

DBMS was introduced to solve the fundamental issues associated with storing, managing, accessing, securing, and auditing data in traditional file systems. Software users and organizations can gain the following advantages from a DBMS:

1. Increased data security

DBMS provides the ability to control users and enforce policies for security and compliance management. This controlled user access increases database security and makes the data less vulnerable to security breaches.

2. Simple data sharing

DBMS enables users to access the database securely regardless of their location. Thus, they can handle any database-related task promptly without the need for complex access methods or worrying about database security. On top of that, DBMS allows multiple users to collaborate effectively when interacting with the database.

3. Data integration

DBMS allows users to gain a centralized view of databases spread across multiple locations and manage them using a single interface rather than operating them as separate entities.

4. Abstraction & independence

DBMS enables users to change the physical schema of a database without changing the logical schema that governs database relationships. As a result, organizations can scale the underlying database infrastructure without affecting the database operations.

Furthermore, any change to the logical schema can also be carried out without affecting applications that access the databases.

5. Streamlined backup & recovery mechanism

Most databases have built-in backup and recovery tools. Yet, DBMS offers centralized tools to facilitate backup and recovery functionality more conveniently and thereby provide a better user experience. Securing data has become easier than ever with functionality like:

  • Automated snapshots
  • Backup scheduling
  • Backup verifications
  • Multiple recovery methods

6. Uniform management & monitoring

DBMS provides a single interface to carry out all the management and monitoring tasks, thus simplifying the workload of database administrators. These tasks can range from database creation and schema modifications to reporting and auditing.

Why is DBMS important?

Considering the many advantages, DBMS is essential for any organization when managing databases. With different DBMS providing different feature sets, it is paramount that organizations rigorously evaluate the DBMS software before committing to a single system. However, a properly configured DBMS will greatly simplify the management and maintenance of databases at any scale. The scale, complexity, and feature set of a DBMS will depend on the specific DBMS and the organization’s requirements.

Help Desk Automation: A Beginner’s Guide

If your help desk is not employing every possible option for automation that’s available, you are missing out on opportunities to:

  • Improve productivity
  • Increase customer satisfaction

With readily accessible tools to automate common, repeatable tasks undertaken by the help desk, I find myself shaking my head when I see so many—particularly in small to medium businesses—that have still not taken advantage of available technology solutions.

Here, I’ll introduce you to help desk automation, from why and how to start building your investment case to choosing automation tools.

What is help desk automation?

Automating your help desk means using software and artificial intelligence (AI) to support customers with their questions or problems. Automated help desks use technology to streamline workflows by taking care of manual tasks and systems work, making it possible to provide a faster, more helpful response to every customer, at a lower cost.

Your help desk’s current state

Do you still manually reset passwords and unlock accounts? Do you spend time assigning access rights to folders and applications? Do you manually assign and prioritize incidents and service requests?

If you do, I have one question: Why?

Help Desk Without Automation

Automating these tasks removes the drudgery from your help desk analysts and allows them to spend more time resolving more complex queries, meaning they can get customers back to work faster as well as giving them far better job satisfaction. Nobody wants to spend their days clicking on ‘unlock account’ buttons and listening to customers explain why they have forgotten their password for the fifth time this year.

Why do organizations—of all sizes—continue to use manual processes on their help desks?

The answer you will often get is that these tools are expensive, and they cannot justify the investment. Unfortunately for them, that is not a valid argument. The outlay required to automate basic tasks has a rapid return on investment (ROI).

Every day that your help desk depends on manual workflows, you risk losing:

  • User satisfaction, confidence, trust
  • Time that could be spent on direct assistance, not admin drudgery
  • Energy that could be directed toward learning and skills improvement
  • Issues and projects that fall through the cracks
  • Knowledge and fixes that are not shared with the team
  • Market reputation

Ultimately, all these inefficiencies cost your company money and your well-earned reputation.

Prioritize automation

If you want to improve the value your help desk offers to the business, then automation needs to be at the top of your ‘to do’ list. Why? It will allow you to:

  • Use your team’s skills more effectively
  • Get a grip on heavy or spiking workloads
  • Accelerate issue resolution to please users
  • Look like a help desk hero to management

Automation can help your team stay lean while handling larger volumes of tickets with ease. In other words, automation helps you work smarter, not harder.

Benefits of help desk automation to build your case

There are many benefits from employing automation on your help desk. These five benefits, however, stand out for their clarity in convincing top leadership that you need to prioritize automation on the help desk—ASAP.

Help Desk With Automation

1. Faster response times

Help desk teams are judged by how quickly they respond to and resolve issues. Being able to quickly resolve issues will greatly improve customer satisfaction.

Workflow automation and orchestration can significantly improve incident response times and problem resolution by incorporating predetermined if/then capabilities. In other words, if you receive a request like this, then do this specific task. For instance, if the help desk receives a call about a printer problem, workflow automation can:

  • Determine the type of call.
  • Route it to the right person.
  • Send an automated response to the user.

Most automation systems also provide some way of allowing users to submit their own tickets via a web portal or email, thus reducing calls to the help desk while speeding up routing—a two-for-one benefit.
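In code, such if/then routing rules can be as simple as the sketch below. The keywords, queues, and priorities are illustrative assumptions; real help desk tools express the same logic through configurable rule engines:

```python
ROUTING_RULES = [
    # (keyword in ticket text, assigned queue, priority)
    ("printer",  "desktop-support", "medium"),
    ("password", "identity-team",   "low"),
    ("outage",   "major-incident",  "critical"),
]

def route_ticket(ticket_text: str):
    """Apply the first matching if/then rule to an incoming ticket."""
    text = ticket_text.lower()
    for keyword, queue, priority in ROUTING_RULES:
        if keyword in text:          # "if you receive a request like this..."
            return queue, priority   # "...then do this specific task"
    return "triage", "medium"        # no rule matched: fall back to triage

queue, priority = route_ticket("My printer won't print invoices")
# -> routed to 'desktop-support'; an automated acknowledgement can follow
```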

2. More accurate reporting

Automating workflows improves the accuracy of help desk statistics by avoiding human errors and inconsistencies in data entry. It also relieves managers of the manual work required to collect data and correct errors. Field defaults, required fields, and auto-routing rules are all tools to ensure every incident reported is handled properly and in precisely the same way.

Understanding how your help desk is currently performing will give you the information you need to plan improvements.

3. Improved user communication

One thing that annoys customers, perhaps more than anything else, is a lack of communication. When help desk analysts are busy trying to resolve incidents or fulfill service requests, it is easy to neglect customer communication.

Automating standard customer communications will mean that you can keep customers informed with little or no effort required from support staff. Automating call notifications, SLA escalations, and call resolution emails will:

  • Ensure that your customers understand what is happening with their calls
  • Help to manage their expectations

4. Skyrocketing productivity

When you can resolve incidents faster and fulfill requests more efficiently, users get back to work faster. This increases productivity and ultimately improves the bottom line for the business.

5. Increased staff satisfaction

Automation of mundane, repeated tasks frees up help desk team members to concentrate on more challenging work. This will increase their job satisfaction and motivation, reducing the cost of turnover.

Choosing help desk automation solutions

Help desk software solutions offer powerful help desk automation functionality based on customizable business rules. When it comes to automation in help desk software, make sure you’re getting these capabilities:

  • Automatically capturing and logging all incoming requests
  • Automatically assigning each issue to a specific help desk technician (or group of technicians) based on skill routing
  • Automatically notifying the technician that a new task has been assigned
  • Automatically prioritizing issues according to rules (e.g., severity, system, person reporting)
  • Automatically applying due dates and routing based on configurable service level agreements (SLAs)
  • Providing tools to document successful fixes for issues so they can be leveraged later
  • Creating and automating workflows to deal with processes such as user onboarding
  • Documenting communication with the user
  • Automatically notifying users of issue resolution or escalation
  • Automatically surveying users after their issue is resolved to gauge satisfaction levels
  • Generating reports based on issue-related and service-related metrics and automatically sending these to stakeholders

Additional automation will allow you to process simple requests without any human intervention, such as:

  • Password resets
  • Folder creation
  • Permissions

Service desk automation software

The purpose of service desk automation software is to optimize across the three primary aspects of helping customers:

  • To provide consistently high-quality help that satisfies customers.
  • To provide that help in a streamlined way that makes the best use of resources and scales with the business.
  • To achieve these aims in the most efficient way, in terms of cost, manpower, and time.

Companies benefit from IT service desk automation in a multitude of ways:

  • Reduced customer wait time, for faster help
  • Improved productivity of service desk personnel, relieving them of menial, repetitive work so they can focus on higher-value tasks
  • Reduced errors and improved consistency, with reliable compliance with regulations and standards
  • Increased service availability to customers, for help when they need it, not just during office hours
  • Personalized responses based on past customer interactions
  • The ability to scale up and down with the needs of the business
  • Captured metrics for insights that can improve performance, track trends, and uncover patterns, surfacing issues and areas where service can be elevated
  • Lowered overall costs
  • Better service to customers, to improve satisfaction

The impact on your business can be considerable. With enhanced efficiency and lower costs, you can reallocate resources to fuel growth, innovation, and profit. When you resolve customer issues speedily and with less effort on their part, you create a better experience and enhance your brand. When you have detailed information about types of issues, volume of issues, resolution times, and customer satisfaction, you can better manage customer service, product development, and overall operations. It all adds up to a competitive advantage.

For the most complete features and cutting-edge technology in service desk automation software, consider BMC Helix ITSM. We support you through onboarding and migrating systems. You will be able to integrate with over 650 partners, and can always rely on our support hub for ongoing help to ensure you get the most from your system.

No question on automation

If you can automate, you should automate your help desk. Initial investment for technology solutions to facilitate automation will quickly be returned by way of improved efficiencies, reduction in errors, increased business productivity, and higher levels of customer satisfaction.

Additional resources

For more on this topic, explore these resources:

Top Mainframe Priorities for Banking, Financial Services, and Insurance Firms

Banking, financial services, and insurance (BFSI) organizations face increased pressure to deliver secure, reliable, and efficient services at scale. For decades, these organizations have relied on the power of the mainframe as a core system to manage and process enormous volumes of transactions with unparalleled stability and scalability. The mainframe’s ability to handle large-scale data workloads with high levels of security and uptime makes it a backbone technology in the financial services industry, supporting everything from real-time transaction processing to compliance monitoring and risk management. All this while consumers have shifted from in-person to web to mobile transactions, and now to artificial intelligence (AI).

With the rise of AI and new customer expectations comes a new balancing act: financial services (FinServ) organizations are embracing new challenges and once again looking to ensure that their core systems stay cutting-edge while the “face” they present to their customers continues to change. Wall Street has always been an early adopter of technology and a large spender, and today is no different. Regulatory requirements are tightening, operational costs are under scrutiny, and a retiring mainframe workforce poses staffing challenges. Many organizations are also working to modernize core, business-critical applications, but they often lack the modern tooling specifically designed for the mainframe.

This gap in tooling can slow modernization efforts and hinder integration with newer digital environments, making it difficult for FinServ firms to keep pace with the demands of a digitally transformed industry. These challenges underscore the importance of having a clear set of priorities to guide mainframe strategy.

Strategic Insights: Investment in the mainframe platform and MIPS growth expectations

According to data from the 2024 BMC Mainframe Survey, FinServ organizations show varied approaches in their mainframe investment strategies:

  • Increased investment (42 percent): A substantial portion of FinServ firms have increased their investment in mainframe platforms, demonstrating a commitment to leveraging its capabilities in handling critical functions and confidence in its ability to support the complex requirements of modern BFSI operations.
  • Maintained investment (47 percent): Nearly half of organizations in the financial sector have chosen to maintain their mainframe investment levels. This trend suggests a focus on optimizing current resources rather than expansion, indicating that many organizations see the mainframe as a stable, necessary asset in their infrastructure.

Additionally, when examining million instructions per second (MIPS) growth expectations—a key indicator of mainframe processing demand—leaders in the financial services industry are cautiously optimistic:

  • Growth (46 percent): Close to half of survey respondents from FinServ firms anticipate an increase in MIPS usage over the next 12 months. This expected growth highlights the ongoing reliance on mainframes for high-volume processing and critical application support.
  • Flat expectations (23 percent): Some organizations expect MIPS usage to remain steady, reflecting a more conservative approach to capacity planning. These organizations may be focused on enhancing efficiency within current capacity limits rather than expanding.

These investment and growth insights underscore the mainframe’s critical role in the FinServ industry, with many organizations viewing it as an essential, stable platform while seeking ways to maximize its value.

Top mainframe priorities for the financial services industry in 2024

The 2024 data reveal several strategic priorities specific to how BFSI organizations are investing in the mainframe. When the results are filtered to only those organizations’ responses, the following key priorities come into focus:

  • Compliance and security (63 percent): Compliance and security lead the list of priorities, reflecting the critical role of mainframes in protecting sensitive financial data and ensuring regulatory adherence. As FinServ organizations face increasing scrutiny and evolving cyber threats, enhancing security measures and ensuring compliance is essential.
  • Cost optimization (50 percent): Managing costs remains a high priority. FinServ firms are focused on balancing business-as-usual IT costs with the need to support growth in system usage and innovation in new workloads. Cost-optimization strategies may include improving efficiency and implementing cost-saving technologies to ensure the mainframe remains a viable asset.
  • Application modernization (48 percent): Modernizing core applications is a top priority for the financial services industry. Updating these applications allows firms to enhance functionality, improve integration with newer technologies, and support digital transformation efforts. This modernization process is crucial for maintaining relevance in a rapidly changing financial landscape.
  • Staffing and skills (45 percent): Nearly half of BFSI organizations cite staffing and skills as a priority, underscoring the industry’s talent gap. With an aging mainframe workforce, there’s a pressing need to develop and recruit talent with mainframe expertise. Ensuring continuity and knowledge transfer is essential to sustaining mainframe operations and supporting future modernization efforts.
  • Enhancing automation (38 percent): Automation remains a significant focus, suggesting that organizations are looking to streamline operations and reduce manual interventions in their mainframe environments. Automation can improve efficiency, reduce errors, and allow staff to focus on higher-value tasks.
  • AIOps/operational analytics (35 percent): Interest in AIOps and operational analytics indicates a trend towards using AI-driven insights to monitor and optimize mainframe performance. This can help FinServ firms improve uptime, preemptively address issues, and optimize operational workflows.

AIOps: Bridging the skills gap but facing complexity

AIOps has emerged as a vital solution for BFSI organizations to address the mainframe skills gap: twice as many BFSI respondents as in 2023 indicated that AIOps allowed them to reduce the skill level of staff required. By automating routine tasks and operational processes, AIOps helps teams with lower skill levels effectively manage mainframe operations, a critical advantage for organizations grappling with a retiring workforce and challenges in attracting new talent. However, 35 percent cite complexity as the biggest challenge of implementing AIOps for the mainframe, as highly integrated systems and hybrid environments often make it difficult to interpret and act on insights generated by AIOps platforms.

To tackle this complexity, BFSI organizations are increasingly turning to generative AI (GenAI), with 37 percent planning to implement it alongside AIOps. GenAI complements AIOps by interpreting complex data insights, providing actionable recommendations, and reducing the cognitive load on IT teams. It transforms AIOps outcomes into easily understandable insights, enabling faster and more informed decision-making. Together, AIOps and GenAI form a powerful hybrid AI strategy, addressing both operational efficiency and interpretability, allowing financial organizations to unlock the full potential of their investments in automation and stay competitive in an evolving market.

The push toward cloud integration

The survey also found that data transformation is reshaping how BFSI organizations manage and protect their mainframe data, with 46 percent of those prioritizing cloud technologies also considering connecting their mainframe to cloud-based workloads. In comparison, only 30 percent of organizations in other industries share this focus, highlighting the BFSI sector’s leadership in hybrid cloud adoption.

This shift reflects a growing trend of integrating mainframes with cloud-based workloads to retain the reliability and security of the mainframe while leveraging the scalability and flexibility of the cloud. By bridging these two environments, BFSI organizations are positioning themselves to unlock greater agility and responsiveness to changing market demands.

Benefits of cloud-enabled mainframe data management

The transition to cloud-enabled mainframe data management offers FinServ firms a range of strategic benefits:

  • Cost efficiency: Moving data from tapes and virtual tape libraries (VTL) to the cloud reduces high storage and maintenance costs.
  • Agility and scalability: Cloud storage enables faster data recovery, better access to archived data, and the ability to scale storage as business needs grow.
  • Enhanced data protection: Cloud data solutions provide advanced encryption and replication, improving data security and compliance in the highly regulated financial sector.

Cloud-enabled data management not only reduces reliance on outdated storage systems but also supports innovation by integrating seamlessly with modern applications. This strategic shift provides BFSI firms with the flexibility they need to address evolving regulatory and market demands while reducing costs and increasing operational agility.

Implications for financial services organizations

The trends in mainframe priorities highlight a careful balance BFSI groups are striking between maintaining the reliability of core systems and preparing for the future. The sustained investment in compliance, security, and application modernization shows a clear understanding of the mainframe’s critical role in meeting regulatory requirements and supporting innovation. At the same time, the focus on cost optimization reflects an awareness of the need to manage operational expenses carefully in an unpredictable economic climate.

The emphasis on staffing and skills development reveals an urgent need to address the workforce challenges posed by an aging mainframe workforce. FinServ organizations that invest in training programs and succession planning will be better positioned to retain expertise and ensure continuity as experienced professionals retire.

Additionally, the growing interest in automation and AIOps demonstrates that BFSI organizations are looking to technology to enhance efficiency and performance. By leveraging automation and AI-driven insights, these organizations can optimize mainframe operations, reduce costs, and preemptively address performance issues. This approach improves operational resilience while also aligning with the broader digital transformation goals that many BFSI organizations are pursuing.

In conclusion, the mainframe remains a vital component of operations in the BFSI sector, providing the reliability, security, and processing power that modern financial services demand. Whether through integrating cloud-based workloads or modernizing core applications, FinServ leaders are embracing innovation to ensure their organizations remain competitive, resilient, and future-ready. To support this transformation, BMC AMI solutions offer powerful tools to help BFSI organizations simplify mainframe management, optimize operations, enhance efficiency, and confidently navigate their modernization journey.

To see the complete results of the 2024 BMC Mainframe Survey, visit bmc.com/mainframesurvey.

Last-Minute Checklist for DORA Compliance: Critical Considerations for Your Mainframe

With the January 17 deadline for compliance with the Digital Operational Resilience Act (DORA) upon us, banking and financial firms across Europe are scrambling to finalize their preparations. While many organizations have spent the past months or even years aligning their systems and processes with DORA’s requirements, some critical elements—such as mainframe systems—may still require attention. Given that mainframes power many of the financial sector’s core operations, ensuring their compliance with DORA is not just a necessity, but a strategic imperative.

This article provides a comprehensive last-minute checklist for mainframe-related DORA compliance. Whether you’re confident in your preparedness or seeking to confirm nothing has been overlooked, these steps will help you ensure that your mainframe systems are ready to meet the stringent requirements of DORA.

Focus areas of DORA for the mainframe

DORA’s core principles emphasize the need for financial institutions to understand their entire IT landscape, including their third-party service suppliers, while identifying potential vulnerabilities and implementing robust, automated strategies to protect systems, data, and customers from cyberthreats and disruptions.

While DORA focuses broadly on information and communication technology (ICT) systems, third-party risk management, incident reporting, resilience testing, and information sharing, firms with mainframe systems must address some unique considerations such as:

Service awareness and availability

  • Conduct regular health checks, automate maintenance tasks, and implement predictive alarms based on workload patterns.
  • Enhance visibility into mainframe activities through robust logging mechanisms that meet DORA’s transparency requirements.
  • Adopt proactive strategies for identifying and addressing potential issues to maintain system availability and meet accountability standards.

Risk management

  • Perform regular vulnerability assessments and penetration testing to identify and remediate issues specific to mainframe architecture.
  • Strengthen security controls, including encryption mechanisms, access controls, and real-time monitoring.
  • Leverage threat intelligence feeds and implement robust incident detection and response protocols for mainframe-specific threats.

Business continuity management

  • Develop and regularly test recovery plans tailored to mainframe failure scenarios, incorporating automated backups and immutable data copies.
  • Ensure failover mechanisms are in place for continuous operations, and validate their effectiveness through simulation exercises.
  • Leverage cloud storage for mainframe backup data to enhance scalability, availability, and disaster recovery options.

Incident management

  • Integrate mainframe monitoring alerts into enterprise-wide service consoles for unified incident management.
  • Collaborate with the Security Operations Center (SOC) to ensure real-time transmission of and response to critical security events.
  • Develop automated response playbooks for common threat scenarios and continuously refine them to address emerging risks.

Governance and compliance

  • Use automated vulnerability scanning tools and compliance checks to streamline governance processes.
  • Maintain continuous adherence to regulatory standards with automated reporting and regular audits.
  • Design governance frameworks that evolve alongside regulatory updates to ensure ongoing compliance.

By addressing these areas, financial institutions can position their mainframe systems as a cornerstone of operational resilience, fully aligned with DORA’s requirements.

The last-minute mainframe checklist for DORA compliance

Building on these focus areas, here’s a specific checklist to ensure compliance readiness:

1. Assess operational resilience

  • Conduct stress tests and simulations to identify vulnerabilities.
  • Validate that disaster recovery (DR) and business continuity (BC) plans include mainframe-specific scenarios.
  • Ensure backup and failover systems can restore operations within required timeframes.
  • Ensure enough capacity to accurately process the data necessary for the performance of activities and the timely provision of services.

2. Verify data integrity and cyber resilience

  • Audit data protection mechanisms, including encryption protocols.
  • Review and update backup processes for secure, quick restoration.
  • Confirm security patches and updates are current.

3. Monitor third-party dependencies

  • Review third-party contracts for alignment with DORA.
  • Confirm vendors have implemented appropriate risk management measures.
  • Assess your ability to replace or compensate for third-party services during disruptions.

4. Strengthen incident reporting and response

  • Enable real-time incident detection and logging.
  • Update escalation procedures to integrate mainframe incidents into the broader response framework.
  • Validate that reporting mechanisms meet DORA’s timeliness and accuracy requirements.

5. Modernize risk and compliance tools

  • Implement AI-driven tools for real-time monitoring and vulnerability analysis.
  • Integrate operational risk management with cybersecurity risk management.
  • Automate compliance reporting to ensure accuracy and minimize effort.

6. Align governance with DORA standards

  • Assign specific responsibilities for mainframe resilience.
  • Document governance processes to demonstrate alignment with DORA.
  • Train executives on mainframe compliance needs.

7. Test business continuity in hybrid environments

  • Conduct continuity tests that include mainframe and hybrid systems.
  • Address integration issues to ensure seamless failover.
  • Ensure systems can share data and resources with modern technologies.

8. Validate regulatory reporting mechanisms

  • Automate and test DORA-compliant reporting systems.
  • Ensure reporting mechanisms meet regulatory deadlines.

9. Review cross-border data transfers

  • Audit data transfer processes for compliance with DORA and the EU’s General Data Protection Regulation (GDPR).
  • Implement safeguards to prevent unauthorized access or breaches.

10. Document evidence of compliance

  • Maintain detailed records of audits, tests, and system upgrades.
  • Ensure documentation is well-organized and accessible for audits.

The cost of non-compliance

Failure to comply with DORA can result in significant fines, reputational damage, and operational disruptions. For financial institutions, operational resilience is both a regulatory requirement and a cornerstone of customer trust and competitive advantage.

How BMC can help

BMC AMI offers comprehensive solutions for mainframe DevOps, AIOps, DataOps, SecOps, and hybrid cloud data protection to simplify mainframe management, enhance operational resilience, and streamline compliance. Our hybrid AI technologies help organizations predict and prevent disruptions, protect against cyberthreats, and maintain robust governance.

Conclusion

With DORA now in effect, it is time to ensure your mainframe systems are ready. By addressing the focus areas above and following the checklist, you can not only achieve compliance but also strengthen your organization’s operational resilience.

Start implementing these steps today and explore BMC’s solutions to simplify compliance and resilience for your mainframe systems.

The Digital Operational Resilience Act (DORA) and Control-M https://www.bmc.com/blogs/controlm-and-dora/ Thu, 16 Jan 2025 10:21:54 +0000

The Digital Operational Resilience Act (DORA) is a European Union (EU) regulation designed to enhance the operational resilience of the digital systems, information and communication technology (ICT), and third-party providers that support the financial institutions operating in European markets. Its focus is to manage risk and ensure prompt incident response and responsible governance. Prior to the adoption of DORA, there was no all-encompassing framework to manage and mitigate ICT risk. Now, financial institutions are held to the same high risk management standards across the EU.

DORA regulations center around five pillars:

Digital operational resilience testing: Entities must regularly test their ICT systems to assess protections and identify vulnerabilities. Results are reported to competent authorities, with basic tests conducted annually and threat-led penetration testing (TLPT) done every three years.

ICT risk management and governance: This requirement involves strategizing, assessing, and implementing controls. Accountability spans all levels, with entities expected to prepare for disruptions. Plans include data recovery, communication strategies, and measures for various cyber risk scenarios.

ICT incident reporting: Entities must establish systems for monitoring, managing, and reporting ICT incidents. Depending on severity, reports to regulators and affected parties may be necessary, including initial, progress, and root cause analyses.

Information sharing: Financial entities are urged by DORA regulations to develop incident learning processes, including participation in voluntary threat intelligence sharing. Shared information must comply with relevant guidelines, safeguarding personally identifiable information (PII) under the EU’s General Data Protection Regulation (GDPR).

Third-party ICT risk management: Financial firms must actively manage ICT third-party risk, negotiating exit strategies, audits, and performance targets. Compliance is enforced by competent authorities, with proposals for standardized contractual clauses still under exploration.

Introducing Control-M

Financial institutions often rely on a complex network of interconnected application and data workflows that support critical business services. The recent introduction of DORA-regulated requirements has created an urgent need for these institutions to deploy additional tools, including vulnerability scanners, data recovery tools, incident learning systems, and vendor management platforms.

As regulatory requirements continue to evolve, the complexity of managing ICT workflows grows, making the need for a robust workflow orchestration platform even more critical.

Control-M empowers organizations to integrate, automate, and orchestrate complex application and data workflows across hybrid and cloud environments. It provides an end-to-end view of workflow progress, ensuring the timely delivery of business services. This accelerates production deployment and enables the operationalization of results at scale.

Why Control-M

Through numerous discussions with customers and analysts, we’ve gained valuable insights that reinforce that Control-M embodies the essential principles of orchestrating and managing enterprise business-critical workflows in production at scale.

These principles are represented in the following picture. Let’s go through them in a bottom-up manner.

Enterprise Production at Scale

Support heterogeneous workflows

Control-M supports a diverse range of applications, data, and infrastructures, enabling workflows to run across and between various combinations of these technologies. These are inherently hybrid workflows, spanning from mainframes to distributed systems to multiple clouds, both private and public, and containers. The wider the diversity of supported technologies, the more cohesive and efficient the automation strategy, lowering the risk of a fragmented landscape with silos and custom integrations.

End-to-end visibility

This hybrid tech stack only becomes more complex in the modern business enterprise. Workflows execute interconnected business processes across this hybrid tech stack. Without the ability to visualize, monitor, and manage your workflows end-to-end, scaling to production is nearly impossible. Control-M provides clear visibility into application and data workflow lineage, helping you understand the relationships between technologies and the business processes they support.

While the six capabilities at the top of the picture above aren’t everything, they’re essential for managing complex enterprises at scale.

SLA management for workflows

Business services, from financial close to machine learning (ML)-driven fraud detection, all have service level agreements (SLAs), often influenced by regulatory requirements. Control-M not only predicts possible SLA breaches and alerts teams to take action, but also links them to business impact. If a delay affects your financial close, you need to know it right away.

Error handling and notification

Even the best workflows may encounter delays or failures. The key is promptly notifying the right team and equipping them with immediate troubleshooting information. Control-M delivers on this.

Appropriate UX for multiple personas

Integrating and orchestrating business workflows involves operations, developers, data and cloud teams, and business owners, each needing a personalized and unique way to interact with the platform. Control-M delivers tailored interfaces and superior user experiences for every role.

Self-healing and remediation

Control-M allows workflows to self-heal automatically, preventing errors by enabling teams to automate the corrective actions they initially took manually to resolve the issue.

Support DevOps practices

With the rise of DevOps and continuous integration and continuous delivery (CI/CD) pipelines, workflow creation, modification, and deployment must integrate smoothly into release practices. Control-M allows developers to code workflows using programmatic interfaces like JSON or Python and embed jobs-as-code in their CI/CD pipelines.
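
As a rough illustration of jobs-as-code, the Python sketch below emits a Control-M-style JSON workflow definition, then validates and deploys it with the Automation API's ctm CLI. The folder, job, server, and host names are placeholders, and both the JSON schema and the CLI commands should be verified against the Automation API documentation for your Control-M version.

```python
import json
import subprocess

# Illustrative jobs-as-code definition; all names below are placeholders,
# and the exact schema should be checked against the Control-M
# Automation API documentation for your version.
workflow = {
    "DemoFolder": {
        "Type": "Folder",
        "ControlmServer": "ctm-server",      # placeholder server name
        "ExtractJob": {
            "Type": "Job:Command",
            "Command": "python extract.py",
            "RunAs": "dataops",              # placeholder run-as user
            "Host": "edge-node-01",          # placeholder agent host
        },
    }
}

with open("jobs.json", "w") as f:
    json.dump(workflow, f, indent=2)

# Validate the definition before deploying; a CI/CD pipeline would
# typically run these same commands as stages after code review.
subprocess.run(["ctm", "build", "jobs.json"], check=True)
subprocess.run(["ctm", "deploy", "jobs.json"], check=True)
```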

Standards in production

Finally, Control-M enforces production standards, a key element since running in production requires adherence to precise standards. It provides a simple way to guide users to the appropriate standards, such as correct naming conventions and error-handling patterns, when building workflows.

Conclusion

DORA takes effect January 17, 2025. As financial institutions prepare to comply with DORA regulations, Control-M can play an integral role in assisting them in orchestrating and automating their complex workflows. By doing so, they can continue to manage risk, ensure prompt incident response, and maintain responsible governance.

To learn more about how Control-M can help your business, visit www.bmc.com/control-m.

Reducing Change Failures with a New BMC AI Agent https://www.bmc.com/blogs/reducing-change-failures-with-new-bmc-ai-agent/ Wed, 15 Jan 2025 17:45:04 +0000

Change is the largest source of production issues. For organizations practicing DevOps, changes created through continuous integration and continuous deployment (CI/CD) pipelines can cause significant outages if they are not properly assessed. Here’s the reality IT teams are working with, and why they need a better way to accelerate change while predicting and managing risk across the enterprise:

  • While organizations trust their DevOps pipeline, the unforeseen risks of introducing rapid change into complex environments, no matter how small, are anxiety-inducing for DevOps teams.
  • When change occurs, organizations often lack visibility into and understanding of the change’s impact on the organization and its ecosystem.
  • Both the change advisory board (CAB) and the DevOps practice of deploying changes lack real-time operational data for accurate risk assessment.

Introducing BMC HelixGPT Change Risk Advisor: The new AI agent

The BMC Helix AIOps 25.1 release delivers on the promise of agentic artificial intelligence (AI) for service management and operations (ServiceOps). The core element of ServiceOps is to accelerate change while predicting and managing risk. This is accomplished by integrating IT service management (ITSM) and AI for IT operations (AIOps) tools and data to automate change risk identification and prediction in a single pane of glass. Among the most exciting capabilities of agentic AI is its ability to interact with a wide variety of tools and data in different scenarios, generating insights, executing tasks, and recommending remediations in a proactive manner.

BMC HelixGPT Change Risk Advisor is a new AI agent that guides change management and DevOps teams to implement system and software changes more rapidly and reliably. The AI agent identifies risky changes by analyzing operations and service management data together, summarizing the analysis, providing a change risk score, and recommending best actions. The AI agent investigates historical change data, the real-time deployment landscape, and additional context around the change. This enables change managers and DevOps teams to collaborate and apply better actions that reduce change failures.

Figure 1: BMC HelixGPT Change Risk Advisor AI agent.

Use case for BMC HelixGPT Change Risk Advisor

BMC HelixGPT Change Risk Advisor can be a powerful agent for reducing outages caused by high-risk changes. For example, when predicting the change risk for a database upgrade, the AI agent can comb through past situations or problems and assess the current database service health. The AI agent generates a risk score, an automated summary of the risk, and recommended actions, which are all relayed to the change management or DevOps teams. If further due diligence is recommended, the agent’s analysis of the change risk can be used to support decision-making (a simplified illustration of a risk-score calculation follows the list below). With the agent’s assistance, change managers and DevOps teams get:

  • Higher success rates by reducing change failures in dynamic systems
  • Proactive change risk predictions for all changes
  • Insights and recommendations throughout the entire change process
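
As a thought experiment, here's what combining those signals into a single score might look like. This weighted heuristic is purely illustrative; BMC HelixGPT's actual risk model is proprietary and far richer than anything sketched here.

```python
# A deliberately simplified illustration of turning historical and
# real-time signals into a 0-100 change risk score. The weights and
# inputs are made-up assumptions, not BMC's method.
def change_risk_score(
    historical_failure_rate: float,  # share of similar past changes that failed (0-1)
    service_health: float,           # current health of the target service (0-1, 1 = healthy)
    blast_radius: int,               # number of dependent services affected
) -> float:
    """Return a 0-100 risk score from three hypothetical signals."""
    radius_factor = min(blast_radius / 10, 1.0)  # saturate at 10 dependents
    score = (
        0.5 * historical_failure_rate
        + 0.3 * (1.0 - service_health)
        + 0.2 * radius_factor
    )
    return round(score * 100, 1)

# A database upgrade with a shaky history on a degraded service:
print(change_risk_score(historical_failure_rate=0.4,
                        service_health=0.6,
                        blast_radius=7))  # -> 46.0
```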

Reducing the change failure rate with BMC Helix

As systems grow more complex and interconnected, it becomes difficult to isolate changes, which increases the likelihood that a change in one area will cause unexpected issues elsewhere.

When it comes to reducing change failure rate, nothing is as impactful as getting your change risk prediction process right. BMC Helix ServiceOps provides a comprehensive solution to many of the problems we’ve discussed:

  • AI agent: With BMC HelixGPT Change Risk Advisor to interact with ITSM and AIOps data and workflows, the CAB and DevOps teams can accelerate change deployment without change anxiety.
  • Change management: By using the change management capabilities of ITSM, the AI agent can correlate past changes and determine their impact on operational variables such as service availability and health.
  • Observability: By utilizing BMC Helix observability and AIOps capabilities, the AI agent gains valuable insights into real-time system and service performance and health, enabling it to reliably predict change risk.

We’re continuing to drive down the change failure rate by catching unforeseen risks in real time. Whether through more integrated workflows or agent optimization to handle new variables and scenarios, each update aims to de-risk DevOps. To learn more about BMC HelixGPT Change Risk Advisor, book a demo here.

BMC Helix earns PinkVERIFY® certification across 18 IT Service and Operations Management process areas https://www.bmc.com/blogs/bmc-earns-pinkverify-certification/ Wed, 15 Jan 2025 17:41:30 +0000

We are delighted to announce that Pink Elephant has awarded PinkVERIFY® certification to BMC Helix across 18 areas of IT practice.

This recognition by Pink Elephant, a globally respected name in IT service management (ITSM) consultancy and assessment for over 40 years, highlights BMC Helix’s ability to transform IT service and operations management. The PinkVERIFY certification remains one of the most trusted benchmarks for ITSM tools, and BMC Helix has consistently exceeded expectations. BMC products have previously received multiple PinkVERIFY® certifications, and BMC Helix ITSM was the first solution endorsed for ITIL® 4 under the 2020 version of the assessment framework.

The expanded scope of PinkVERIFY certification

Since its launch in 1999, PinkVERIFY® has been recognized as a vendor accreditation of the highest standard. In 2023, it underwent its most significant change, expanding its scope to reflect a broader range of modern, real-world approaches to today’s complex IT environments. 

Formerly focused on ITIL®, PinkVERIFY® now reflects a solution’s ability to support multiple ITSM, International Organization for Standardization (ISO), and International Electrotechnical Commission (IEC) standards versus a single framework.

With this update, PinkVERIFY® has been extended to include a broader range of topics relevant to the demands of today’s complex IT environments. Its assessment extends into IT operations management and artificial intelligence technologies, such as generative AI, which play increasingly prominent roles in IT management.

BMC Helix’s comprehensive certification

The new PinkVERIFY® certification was awarded to BMC Helix after a demanding, hands-on assessment of its capabilities over several weeks in December 2024. To achieve compliance for each process area, we were required to demonstrate to Pink Elephant that our tools could fully support every assessment point. BMC Helix met every criterion across all 18 assessed process areas. 

As a result, certification was awarded to BMC Helix for each of the following processes:  

  • AI capability 
  • Availability management 
  • Capacity management 
  • Change management 
  • Configuration management 
  • Financial management 
  • Governance, risk, and compliance 
  • Incident management 
  • IT asset management 
  • IT operations management 
  • Knowledge management 
  • Monitoring and alerting 
  • Problem management 
  • Release and deployment management 
  • Request management 
  • Service catalog management 
  • Service desk 
  • Service level management 

Driving innovation with GenAI and ServiceOps

BMC Helix’s accreditation for a wider-than-ever set of process areas reflects BMC’s significant investment in ServiceOps and GenAI and the extensive capabilities of the unified BMC Helix platform, including the use of agentic AI to transform enterprise work, improve agility, reduce downtime, and enable fast resolutions. 

“We are delighted to receive Pink Elephant’s endorsement across so many processes,” said Margaret Lee, Senior Vice President and General Manager of Digital Service and Operations Management at BMC. “This endorsement reflects the confidence of a highly trusted, long-standing independent assessor in the quality and capabilities of the BMC Helix solution, especially for IT and enterprise service management.” 

Learn more about Pink Elephant, the PinkVERIFY® program, and BMC’s certification here. 

Experience BMC Helix

To learn more and experience BMC Helix’s PinkVERIFY®-certified capabilities, try our instant, free, guided demonstration at https://www.bmc.com/forms/bmc-helix-itsm-demo.html.


What Is AIaaS? Artificial intelligence as a service explained https://www.bmc.com/blogs/ai-as-a-service-aiaas/ Wed, 15 Jan 2025 00:00:48 +0000

Today, almost all companies use at least one type of “as a service” offering as a way to focus on their core business and outsource other needs to third-party experts and vendors. Though software as a service has the largest global spend—$105 billion spent in 2020 alone—IaaS and PaaS are expected to grow faster in the coming years.

Now the same as a service approach is being applied to a new field: AIaaS. AIaaS is short for Artificial Intelligence-as-a-service. The term and the product are on the rise, and we’re digging into what AIaaS means in this article.

We’ll look at:

  • What AIaaS is and how AI works
  • The growth and types of AIaaS
  • Benefits, drawbacks, and vendors of AIaaS
  • AIaaS versus AIPaaS, and the future of AIaaS

Let’s get started.

What is AIaaS?

AIaaS stands for artificial intelligence as a service. It refers to off-the-shelf AI tools that enable companies to implement and scale AI techniques at a fraction of the cost of a full, in-house AI.

The concept of everything as a service refers to any software that can be called upon across a network because it relies on cloud computing. In most cases, the software is available off the shelf. You buy it from a third-party vendor, make a few tweaks, and begin using it nearly immediately, even if it hasn’t been totally customized to your system.

For a long time, artificial intelligence was cost-prohibitive to most companies:

  • The machines were massive and expensive.
  • The programmers who worked on such machines were in short supply (which meant they demanded high payments).
  • Many companies didn’t have sufficient data to study.

As cloud services have become widely accessible, AI has become more accessible, too: companies can now gather and store nearly unlimited data. This is where AI-as-a-service comes in.

Now, let’s detour into AI so that we have the right expectations when engaging with AIaaS.

Understanding AI

We hear it repeated over and over: artificial intelligence is a way to get machines to do the same kind of work that human brains can accomplish. This definition is the subject of significant debate, with technology experts arguing that comparing machines to human brains is the wrong paradigm to use, one that may promote the fear that machines will take over from humans.

The term AI can also be used as a marketing tactic for companies to show how innovative they are—something known as artificial AI or fake AI.

Before we start worrying about the technological singularity, we need to understand what AI actually is.

“Intelligence is the efficiency with which you acquire new skills at tasks you didn’t previously prepare for… Intelligence is not skill itself, it’s not what you can do, it’s how well and how efficiently you can learn new things.” — François Chollet, AI researcher at Google and creator of Keras

Machines that can adapt to new environments and create solutions to new problems are said to exhibit artificial intelligence. Just as humans constantly react to new challenges, computers can now react in ways their programmers didn’t explicitly program.

Importantly, though, AI isn’t created on its own—humans create it. We call some entity intelligent if it can do things humans normally do.

Today, machine learning is the leading type of AI. It’s the most mature of several areas of AI. Just like AI, though, there’s a lot of hype around ML versus what it actually is. Today, ML can do a lot of things, but it isn’t some pie in the sky solution that will solve all your organizational problems.

(Understand the differences between ML and AI.)

How does AI work?

The majority of AI makes use of algorithms. An algorithm is a set of rules or a process followed, typically by a computer, to calculate something or solve a problem. For AI algorithms, computers solve specific tasks by:

  • Studying very large amounts of data.
  • Making generalizations or statistical estimations.

AI algorithms are often broken into two types:

  • Machine learning algorithms, including classification and regression
  • Deep learning algorithms that employ deep neural nets

(Explore top ML algorithms.)
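
To make the classification idea concrete, here's a short, generic example of an ML algorithm studying data and then generalizing to unseen examples. scikit-learn and its bundled iris dataset are stand-ins for whatever tooling and data your organization actually uses.

```python
# A generic illustration of classification: the model "studies" training
# data, then makes statistical generalizations about data it never saw.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                            # study the data
print(f"Accuracy: {model.score(X_test, y_test):.2f}")  # generalize to unseen data
```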

When these algorithms are applied in certain ways, computers can seem to act like a human brain:

  • Determining objects in a picture
  • Carrying on a spontaneous conversation with a human being
  • Responding to roadblocks from a driverless car
  • Chatting with humans with 24/7 availability

Companies want to take advantage of all the insight they can glean from data. For organizations, data might help them:

  • Better understand their customers and what they want
  • Find points in production and service delivery that can be automated
  • Understand why some people buy and some don’t

Any seemingly intangible information can turn out to be a competitive edge.

(Compare three simple data strategies and data ethics for businesses.)

AI is big business

If your company isn’t employing artificial intelligence yet, you will be soon. International Data Corporation (IDC) estimates that worldwide spending on AI will increase from the $50.1 billion spent in 2020 to more than $110 billion by 2024.

A further look back shows that only $12 billion was spent in 2017. That means global spending is expected to increase almost tenfold in only seven years.

It seems that all this spending is coming from many places—not just the big enterprises. Flexera recently reported on the wide-scale adoption of AI:

  • 28% of enterprises are experimenting with AI/ML.
  • 46% of enterprises are experimenting or plan to experiment with AI/ML.
  • AI/ML is the field companies are most experimenting in.

What does this mean? Surely that AI and ML are on most organizational radars today. The bigger takeaway might be that almost half of all enterprises are expected to be using the technology in the next few years.

The growth of AIaaS

For companies that can’t or are unwilling to build their own clouds and build, test, and utilize their own artificial intelligence systems, artificial intelligence as a service is the solution. This is the biggest draw: the opportunity to take advantage of data insights without needing the massive up-front investment in talent and resources.

Like other “as a service” options, the same benefits apply with AIaaS:

  • Staying focused on core business (not becoming data and machine learning experts)
  • Minimizing the risk of investment
  • Increasing the benefits you gain from your data
  • Improving strategic flexibility
  • Making cost flexible and transparent

Types of AIaaS

Common types of AIaaS include:

Chatbots & digital assistance

These can include chatbots that use natural language processing (NLP) algorithms to learn from conversations with human beings and imitate the language patterns while providing answers. This frees up customer service employees to focus on more complicated tasks.

These are the most widely used types of AIaaS today.

Cognitive computing APIs

Short for application programming interface, APIs are a way for services to communicate with each other. APIs allow developers to add a specific technology or service into the application they are building without writing the code from scratch. Common options for APIs include:

  • NLP
  • Computer speech and computer vision
  • Translation
  • Knowledge mapping
  • Search
  • Emotion detection
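
To show the general shape of such an API call, here's a hedged Python sketch against a hypothetical sentiment-analysis endpoint. The URL, authentication scheme, and response format are placeholders; each real provider (AWS, Azure, GCP, and others) defines its own.

```python
import requests

# Hypothetical NLP endpoint and response shape: real providers each have
# their own URLs, auth schemes, and payload formats.
API_URL = "https://api.example.com/v1/sentiment"
API_KEY = "your-api-key"  # placeholder credential

def analyze_sentiment(text: str) -> dict:
    """Send text to a cloud NLP API and return its JSON verdict."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g., {"label": "positive", "score": 0.97}

print(analyze_sentiment("The new dashboard makes my job so much easier."))
```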

Machine learning frameworks

ML and AI frameworks are tools that developers can use to build their own model that learns over time from existing company data.

Machine learning is often associated with big data but can have other uses—and these frameworks provide a way to build in machine learning tasks without needing the big data environment.

Fully managed machine learning services

Machine learning frameworks are the first step toward machine learning. Fully managed services go further, adding richer machine learning capabilities with templates, pre-built models, and drag-and-drop tools that help developers build a more customized machine learning solution.

Benefits and drawbacks in AI

Benefits & drawbacks of AIaaS

Like any other “as a service” offering, the AI as a service business model brings value to companies without costing huge amounts. However, there are also distinct drawbacks to using a cloud-based AI system that no business should ignore.

Why should I use AIaaS?

  • Advanced infrastructure at a fraction of the cost. Successful AI and machine learning require many parallel machines and speedy GPUs. Prior to AIaaS, a company might decide the initial investment and ongoing upkeep were too much. Now, AI infrastructure as a service means companies can harness the power of machine learning at significantly lower cost, so you can keep working on your core business rather than training for and spending on areas that only partially support decision-making.
  • Flexibility. Hand in hand with lower costs, there’s a lot of transparency within AIaaS: pay for what you use. Though machine learning requires a lot of compute power, you may only need that power for short periods—you don’t have to run AI non-stop.
  • Usability. While many AI options are open source, they aren’t always user-friendly, which means your developers spend time installing and developing the ML technology. AIaaS, by contrast, is ready out of the box, so you can harness the power of AI without becoming technical experts first.
  • Scalability. Artificial intelligence as a service allows you to start with smaller projects to learn whether it fits your needs. As you gain experience with your own data, you can tweak your service and scale up or down as project demands change.

What are the challenges of AIaaS?

  • Reduced security. AI and machine learning depend on significant amounts of data, which means your company must share that data with third-party vendors. Data storage, access, and transit to servers must be secured to ensure the data isn’t improperly accessed, shared, or tampered with.
  • Reliance. Because you’re working with one or more third parties, you’re relying on them to provide the information you need. This isn’t inherently a problem, but it can lead to lag time or other issues if any problems arise.
  • Reduced transparency. In AIaaS, you’re buying the service, but not the access. Some consider “as a service” offerings, particularly those in ML, to be a black box—you know the input and the output, but you don’t understand the inner workings, like which algorithms are being used, whether the algorithms are updated, and which versions apply to which data. This may lead to confusion or miscommunication regarding the stability of your data or the output.
  • Data governance. Particular industries may limit whether or how data can be stored in a cloud, which altogether may prohibit your company from taking advantage of certain types of AIaaS.
  • Long-term costs. Costs can quickly spiral with all “as a service” offerings and AIaaS is no exception. As you wade deeper into AI and machine learning, you may be seeking out more complex offerings, which can cost more and require that you hire and train staff with more specific experience. As with anything, though, the costs may be a wise investment for your company.

Vendors of AIaaS

You can probably guess the major vendors of AIaaS.

AI as a service examples include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), which are all industry-leading companies that have brought AIaaS offerings to many companies worldwide. Each vendor offers different types of bots, APIs, and machine learning frameworks, in addition to fully managed machine learning options.

Other AI as a service examples feature well-known technology firms, including Salesforce, Oracle, and SAP, moving into the Big 3’s territory.

Countless start-ups are focusing on various parts of AIaaS as well. As in all industries, it’s not uncommon for larger companies to purchase smaller ones to add the developed services to their portfolios.

Difference between AI as a service (AIaaS) and AI platform as a service (AIPaaS)

Cloud-based offerings make technology advances more accessible, affordable, and scalable, while helping businesses stay current with the latest innovations. These benefits are especially valuable for AI technologies, hence the rapid growth of AIaaS and AIPaaS.

The differences between AIaaS and AIPaaS revolve around their scope and purpose. AIaaS offers ready-made, off-the-shelf tools and APIs that address specific functions and needs. That approach is not adequate when you require customized models. AIPaaS provides the infrastructure and tools to build and train AI models that address a specific use case.

AI platform as a service (AIPaaS)

AIPaaS is a specialized PaaS that provides a full AI solution development environment. Data scientists and advanced developers have the flexibility they need to manage data and then create and train AI models tailored to organizational needs. AIPaaS also offers infrastructure and tools for deploying and managing models.

Key differences

Future of AIaaS

As a rapidly growing field, AIaaS has plenty of benefits that bring early adopters to the table. But its drawbacks mean there’s plenty of room for improvement.

While there may be bumps in the road while developing AIaaS, it’s likely to be as important as other “as a service” offerings. Taking these valuable services out of the hands of the few means that many more organizations can harness the power of AI and ML.


New BMC HelixGPT Enhancements Resolve Network Issues and Vulnerability and Change Risks https://www.bmc.com/blogs/resolving-network-issues-and-vulnerability-and-change-risks/ Tue, 14 Jan 2025 15:52:44 +0000

Having the right contextual data is crucial for resolving network issues and understanding which security vulnerabilities to fix first, before they get exploited by bad actors. Additionally, assessing potential system and software changes by analyzing historic change data and current operational data can help prevent risky changes that are likely to cause issues.

The latest BMC Helix ITOM 25.1 release helps network operations, IT operations (ITOps), and DevOps teams resolve network and cloud issues faster, address critical vulnerability risks with less manual effort, and reduce change failures with new key enhancements:

  • Network topology infrastructure and deep host discovery—Discover network topology and relationships using Border Gateway Protocol (BGP) data and get deep host discovery on Oracle Cloud Infrastructure (OCI) hosts that detects operating system and other software details from cloud virtual machines using existing API scans and credentials.
  • New just-in-time integrators for ServiceNow, Jira, Splunk, and Elastic—Get artificial intelligence (AI)-driven insights, summaries, and recommendations for incidents from BMC Helix ITSM, ServiceNow, and Jira, plus situational log insights for a larger set of log data from BMC Helix Log Analytics, Splunk, and Elastic tools.
  • BMC HelixGPT Vulnerability Resolver—Consolidate vulnerability data, get a unified view of vulnerabilities impacting critical business services, and speed recommended remediations for those vulnerabilities with a built-in AI agent.
  • BMC HelixGPT Change Risk Advisor—Identify risky changes for change managers and DevOps engineers by analyzing operations and service management data with the integrated AI agent.

Network topology infrastructure and deep host discovery

Incomplete network topology discovery leads to poor network visibility and inaccurate root cause correlation. Insufficient visibility into complex network infrastructure makes it difficult to troubleshoot, identify issues, and perform root cause analysis. Incomplete topology combined with inaccurate correlation of network events can lead to misinterpretation of network issues and ineffective troubleshooting.

BMC Helix now discovers network logical topologies using Border Gateway Protocol (BGP) data across IT infrastructure spanning data centers, campuses, and branches. This functionality enables the BMC Helix platform to complement existing network infrastructure awareness with network logical relationships for network operations teams.

Accurate root cause analysis relies on service models that include an understanding of infrastructure, software, and relationships. A new feature discovers cloud hosts in detail via the Oracle Cloud Infrastructure (OCI) API. This enables IT operations teams to get a detailed understanding of operating system details, software inventory, and relationships from OCI virtual machines without relying on IP-based scans. Since it uses existing cloud credentials, it also reduces the operational overhead and complexity of managing individual host credentials.

Network topology infrastructure discovery

New integrators for ServiceNow, Jira, Splunk, and Elastic

One of the most exciting capabilities of BMC Helix AIOps is the solution’s ability to interact with a wide variety of tools and data to generate insights and execute tasks in a proactive manner. In the 25.1 release, new just-in-time integrators for ServiceNow and Jira bring incident data into BMC Helix AIOps to provide insights, summarization, and recommendations to fix production issues. The integrators for Splunk and Elastic enable BMC Helix AIOps to analyze just-in-time log data from those tools, providing comprehensive, enterprise-wide log insights and a better understanding of trends and patterns within log data to identify the root cause of system issues.

BMC HelixGPT Just-in-Time Integrator Manager

BMC HelixGPT Vulnerability Resolver

Security operations (SecOps) teams are responsible for identifying security issues for DevOps teams to fix. However, these security issues are so numerous that it’s difficult for DevOps teams to prioritize which to fix first. BMC HelixGPT Vulnerability Resolver, an AI agent within BMC Helix AIOps, helps security operations and DevOps teams improve compliance and risk management by providing:

  • A risk view: Display vulnerability risks side by side and in context of the service health in the “Risks” tab within the BMC Helix AIOps UI.
  • Criticality insight: Help IT teams understand the impact on business services and enable them to prioritize remediation.
  • Vulnerability Best Action Recommendation (VBAR): Get faster recommended remediations for critical vulnerabilities.

BMC HelixGPT Vulnerability Resolver enables DevOps teams to prioritize the order of vulnerability issues to fix, create remediation tickets for each affected asset in one click, and resolve vulnerabilities through patching or configuration updates.
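
As a simple illustration of that prioritization idea, the sketch below ranks vulnerabilities by severity weighted by the business criticality of the affected service. The CVE IDs, services, and weights are made-up examples, and BMC HelixGPT Vulnerability Resolver's actual scoring is proprietary.

```python
# An illustrative prioritization heuristic only: rank vulnerabilities by
# CVSS severity weighted by the business criticality of the affected
# service. All data below is hypothetical.
vulnerabilities = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "service": "payments", "criticality": 1.0},
    {"cve": "CVE-2024-0002", "cvss": 7.5, "service": "intranet-wiki", "criticality": 0.2},
    {"cve": "CVE-2024-0003", "cvss": 6.1, "service": "payments", "criticality": 1.0},
]

for vuln in vulnerabilities:
    vuln["priority"] = vuln["cvss"] * vuln["criticality"]

# Fix the highest business-weighted risk first.
for vuln in sorted(vulnerabilities, key=lambda v: v["priority"], reverse=True):
    print(f'{vuln["cve"]}: priority {vuln["priority"]:.1f} ({vuln["service"]})')
```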

BMC HelixGPT Change Risk Advisor

Changes are the largest source of production issues. Most organizations don’t have a single approach to delivering changes, which increases the risk of introducing bugs or performance issues in production. In the traditional change approval process, the change advisory board (CAB) makes the final decision to approve or reject proposed changes by performing risk analysis and directing the implementation of changes. For DevOps teams pushing thousands of changes daily, the CAB process is simply too slow, so DevOps teams follow their own practice of using automation and tooling to merge code changes and automate testing steps to deploy changes more frequently.

The recently announced BMC HelixGPT Change Risk Advisor is an AI agent designed to guide change management and DevOps teams in deploying system and software changes more rapidly and reliably. Change Risk Advisor analyzes historic change data and current operational data to present a change risk score so that change management, DevOps, platform engineering or site reliability engineering (SRE) teams can quickly assess the risk of making a change prior to putting it in production.

BMC HelixGPT Change Risk Advisor.

New configurable dashboards for managed service providers (MSPs) and multi-tenancy users

In addition to all of the great new capabilities added to BMC Helix ITOM, MSPs and large organizations running multiple tenants of BMC Helix now have a dashboard view of critical issues across their tenant instances, giving IT organizations:

  • Visibility into critical data, including separate panels for each instance.
  • The ability to look across hundreds of tenant instances for better and faster issue alerting and response.

BMC Helix multi-tenant dashboard.

To learn more about how these new BMC Helix capabilities can help you transform your IT operations, contact us for a consultation.
