
OUR BLOG


Category: IBM

How a Powerful Data and AI Foundation May Support an Effective ESG Strategy

Posted on April 24, 2023 by Marbenz Antonio


An effective data architecture should be able to facilitate business intelligence and analysis, automation, and AI, all of which can enable organizations to take advantage of market opportunities, enhance customer value, achieve significant efficiencies, and mitigate risks such as supply chain disruptions. Additionally, a well-designed data foundation can play a critical role in managing ESG (environmental, social, and governance) commitments. Fortunately, achieving ESG goals can also benefit businesses, as sustainability initiatives can enhance business value for organizations that are committed and capable of effectively executing them.

Integrating data and using insights to help drive environmental initiatives

Data-driven insights can assist organizations in understanding their performance and measuring progress toward their ESG objectives. Such insights can also be used to drive operational efficiency, and credible environmental reporting necessitates factual data. To achieve this, organizations should implement a modern data architecture and data governance approach. This will enable users to access relevant data quickly and facilitate self-service, regardless of its location, thereby providing a strong foundation for ESG programs and insights.

After determining your data needs, you may have to obtain data from different operational systems and applications and integrate and arrange them for easy access by stakeholders throughout your organization. These stakeholders may include real estate, finance, HR, procurement teams, and the sustainability team. With a unified dataset, everyone can make informed decisions, ranging from goal setting to prioritizing sustainability investments.

Supporting the increasingly important social and governance pillars

The social pillar of ESG includes reporting obligations that intersect with both organizational risk and human impact. As AI increasingly informs HR decisions such as hiring, evaluation, and promotion, organizations must respond to new and expanding regulations while also addressing ESG standards.

Adopting a data architecture that can support AI governance is increasingly crucial for organizations. This means implementing sound data governance practices that ensure access is limited to authorized processes and stakeholders, while also ensuring transparency and explainability in the use and trustworthiness of AI. The approach should also provide sufficient metadata to enable HR decision-makers to identify which decisions and processes are informed by AI while maintaining privacy in the data. Implementing a data fabric architecture can enhance an organization’s governance and oversight capabilities and strengthen its ability to manage various forms of risk.

How a data fabric architecture can support ESG efforts

ESG initiatives can benefit greatly from a data architecture that allows for the collection, integration, and standardization of data from multiple sources and enables broad access to it by different stakeholders. An architectural approach known as data fabric simplifies data access, facilitates self-service data consumption, and supports the integration of data sources, pipelines, and AI applications. This approach enhances data quality, stewardship, and observability through machine learning-based automation. With the expansive nature of ESG data and its involvement of various departments, partners, and suppliers, a data fabric can provide the necessary governance, integration, and insights at scale.

To report on all three ESG pillars, it’s important to assess your framework and determine what data is necessary for a transparent and credible disclosure report. With a data fabric architecture, accessing and updating the ESG reporting framework is made simpler, enabling teams to deliver reports more efficiently.

Conclusion

Incorporating ESG considerations into business decisions is increasingly critical for companies to meet stakeholder expectations and regulatory requirements. By leveraging data architecture and AI, organizations can effectively gather, integrate, and analyze ESG data to support their initiatives, measure progress, and enhance transparency. This can lead to improved risk management, operational efficiency, and sustainable growth while meeting both business and societal goals.

 


Here at CourseMonster, we know how hard it may be to find the right time and funds for training. We provide effective training programs that enable you to select the training option that best meets the demands of your company.

For more information, please get in touch with one of our course advisers today or contact us at training@coursemonster.com

Posted in IBM | Tagged IBM

How to Overcome Bias in Algorithmic AI among Global Executives and HR

Posted on April 18, 2023 by Marbenz Antonio


The trend of monitoring HR tools and applications for bias is gaining momentum globally, primarily due to several international and local data privacy laws and the US Equal Employment Opportunity Commission (EEOC). As a part of this trend, the New York City Council has passed new regulations that mandate organizations to perform annual bias audits on automated employment decision-making tools utilized by HR departments.

Enforcement of the new regulations, passed in December 2021, will require organizations that utilize algorithmic HR tools to perform an annual bias audit. Any organization that fails to comply with the new legislation may be subject to fines ranging from a minimum of USD 500 to a maximum of USD 1,500 per infringement. In anticipation of this transition, some organizations are establishing a yearly assessment, mitigation, and examination procedure. Here’s a recommendation for how this procedure may be implemented.

Step one: Evaluate

For organizations to have their hiring and promotion systems assessed, it’s essential to take an active approach by educating stakeholders on the significance of this procedure. A diverse team responsible for evaluation, comprising HR, Data, IT, and Legal professionals, can be instrumental in navigating the constantly evolving regulatory framework related to AI. This team should be integrated into the organization’s business processes, and its role should be to assess the entire process from sourcing to hiring and scrutinize how the organization sources, screens, and recruits both internal and external candidates.

The evaluation team must scrutinize and record each system, decision point, and vendor based on the population they cater to, including hourly workers, salaried employees, various pay groups, and countries. While some third-party vendor information may be confidential, the evaluation team should still examine these processes and establish protective measures for vendors. It’s vital for any proprietary AI to be transparent, and the team should strive to promote diversity, equity, and inclusion in the hiring process.

Step two: Impact testing

As governments worldwide enforce regulations pertaining to the use of AI and automation, organizations need to assess and revise their processes to ensure compliance with the new mandates. This entails conducting meticulous scrutiny and testing of processes that involve algorithmic AI and automation, taking into account the specific regulations applicable in each state, city, or locality. Given the varying degrees of rules in different jurisdictions, it is crucial for organizations to stay well-informed and adhere to the requirements to mitigate any potential legal or ethical ramifications.

Step three: Bias review

Once the evaluation and impact testing has been concluded, the organization can commence the bias audit, which may be mandated by law and should be performed by an impartial algorithmic institute or a third-party auditor. It is crucial to select an auditor with expertise in HR or Talent, who can be trusted to provide explainable AI and who holds RAII Certification and DAA digital accreditation. Our organization is well-equipped to aid companies in becoming data-driven and achieving compliance. If you require any assistance, please don’t hesitate to contact us.

Data and AI Governance’s Role

Having a suitable technology blend can be critical to ensuring an effective data and AI governance strategy, with a contemporary data architecture like data fabric being a vital element. Policy orchestration is an excellent tool within a data fabric architecture that can simplify the intricate AI audit processes. By incorporating AI audit and associated processes into the governance policies of your data architecture, your organization can gain insights into areas that necessitate ongoing scrutiny.

What will happen next?

IBM Consulting has been assisting clients in establishing an evaluation process for bias and other related areas. The most challenging aspect is setting up the initial evaluation and taking stock of every technology and vendor that the organization engages with for automation or AI. Nevertheless, implementing a data fabric architecture can streamline this process for our HR clients. A data fabric architecture offers clarity into policy orchestration, automation, AI management, and the monitoring of user personas and machine learning models.

Organizations must recognize that this audit is not a one-time or stand-alone event. It is not only about the regulations enacted by a single city or state. These laws are part of a sustained trend where governments are intervening to mitigate bias, establish ethical AI use, safeguard private data, and reduce harm resulting from mishandled data. Therefore, organizations must allocate funds for compliance costs and form a cross-disciplinary evaluation team to develop a regular audit process.

 



Posted in IBM | Tagged IBM

Empowering a Successful ESG Strategy through a Robust Foundation of Data and AI

Posted on April 17, 2023 by Marbenz Antonio


Organizations can rapidly seize market opportunities, improve customer value, achieve significant efficiencies, and mitigate risks such as supply chain disruptions with the help of a well-designed data architecture that supports business intelligence, analysis, automation, and AI. Moreover, a well-designed data foundation can revolutionize the way organizations handle ESG (Environmental, Social, and Governance) commitments. The good news is that the advantages of business benefits and ESG benefits are not contradictory but instead complement each other. By being dedicated and efficient in execution, organizations can increase business value through their sustainability efforts.

Integrating data and using insights to help drive environmental initiatives

Utilizing data that is based on facts can aid organizations in comprehending their performance and assessing their progress toward achieving widespread ESG objectives. The insights produced from this data can help organizations advance their ESG initiatives and enhance operational efficiency. Additionally, trustworthy environmental reporting should be underpinned by accurate data. To accomplish this, organizations need to establish the appropriate foundation, including a modern data governance approach and architecture. By implementing a modern data architecture, organizations can give users self-service access to the data relevant to their roles, regardless of where that data resides, enabling them to gain insights swiftly.

After identifying the necessary data requirements, it may be necessary to obtain data from various operational systems and applications, integrate them, and arrange them in an easily accessible format for stakeholders throughout the organization. These stakeholders may include teams such as real estate, finance, HR, procurement, and sustainability. By utilizing the same dataset, all stakeholders can make informed decisions, from establishing goals to determining which sustainability investments should be prioritized.

Supporting the increasingly important social and governance pillars

Some reporting obligations connect organizational risk with human impact and are categorized as part of the social component of ESG. As AI becomes increasingly involved in HR decisions, such as recruitment, performance assessment, and promotion, an organization’s obligation to comply with expanding regulations will increasingly overlap with the need to meet ESG standards.

Adopting a data architecture that accommodates AI governance is now imperative for organizations. Such a framework should support appropriate data governance, including limiting access to authorized processes and stakeholders, while also promoting transparency and explainability in the use and reliability of AI. The methodology should be designed to provide sufficient metadata to enable key decision-makers in HR to recognize which processes and decisions are influenced by AI while preserving anonymity and confidentiality in the data. Implementing a data fabric architecture can improve an organization’s ability to manage and oversee key governance areas, while also enhancing its capability to identify and mitigate various types of risks.

How a data fabric architecture can support ESG efforts

An effective data architecture can improve an organization’s ability to implement ESG initiatives by supporting the collection, integration, and standardization of data from diverse sources and providing access to a broad range of stakeholders. A data fabric is an architectural approach that simplifies data access, enabling self-service data consumption at scale. This architecture allows for modeling, integration, and querying of data sources, the building of data pipelines, real-time data integration, and the running of AI-driven applications. Additionally, a data fabric can improve data reliability by providing enhanced data observability and automating data quality tasks across platforms using machine learning. With the vast array of ESG data elements that involve numerous departments within a company and extend to partner and supplier networks, a data fabric can help facilitate governance, integration, and data analysis at scale.

To report on all three pillars of ESG, an organization should start by assessing its framework and identifying the necessary data to support a transparent and credible disclosure report. A data fabric architecture can simplify data access, enabling teams to evaluate and update their ESG reporting framework efficiently and deliver reports more effectively.

Conclusion

A strong foundation of data and AI can help organizations empower their ESG strategy by enabling them to collect and analyze relevant data, identify risks and opportunities, and make informed decisions. With a modern data architecture like data fabric, organizations can achieve better data governance, enhance data observability, and automate data quality tasks. Ultimately, a successful ESG strategy can lead to improved business value and a more sustainable future for all.

 



Posted in IBM | Tagged IBM

Using the Power10 Chip to Speed Up AI Inferencing

Posted on March 20, 2023 by Marbenz Antonio


An inferencing model is a model that has been trained to identify patterns of interest and is then applied to data with the goal of gaining insights from it.

Compared to training an artificial intelligence (AI) model, inferencing doesn’t require as much computing power. As a result, it’s feasible and even more energy-efficient to perform inferencing without additional hardware accelerators, like GPUs, and to do so on edge devices. It’s not uncommon for AI inferencing models to run on smartphones and similar devices using just the CPU. In fact, many picture and face filters found in social media phone apps rely on AI inferencing models.

IBM’s Power10 chip

IBM was a trailblazer in incorporating on-processor accelerators for inferencing into its IBM Power10 chip, which it dubbed the Matrix Math Accelerator (MMA) engines. By doing so, the Power10 platform is able to outpace other hardware architectures in terms of speed without requiring the use of additional GPUs, which would consume more energy. This means the Power10 chip can derive insights from data more quickly than any other chip architecture while consuming significantly less energy than GPU-based systems. That’s why it’s an optimal choice for AI applications.

When using IBM Power10 for AI, particularly for inferencing, AI DevOps teams don’t need to exert any additional effort. This is because data science libraries, including openBLAS, libATen, Eigen, and MLAS, among others, have already been optimized to utilize the Matrix Math Accelerator (MMA) engines. Consequently, AI frameworks that leverage these libraries, such as PyTorch, TensorFlow, and ONNX, are already able to take advantage of the on-chip acceleration. These optimized libraries can be accessed through the RocketCE channel on anaconda.org.

IBM Power10 can accelerate inferencing by utilizing reduced-precision data. Rather than using 32-bit floating point data, for instance, the inference model can be fed with 16-bit floating point data, which enables the processor to process twice as much data for inferencing simultaneously. This approach can be effective for some models without compromising the accuracy of the inferred data.
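
To make the idea concrete, here is a minimal, generic PyTorch sketch of running inference in 16-bit floating point. It is not a Power10-specific API; on Power10, frameworks built on the optimized libraries route such operations through the MMA engines automatically, and the model shown is a hypothetical stand-in for a real inferencing model:

import torch

# Hypothetical stand-in for a trained inferencing model.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 10),
).eval()

batch = torch.randn(32, 128)  # a batch of 32 input vectors

# Autocast runs eligible operations in bfloat16 (a 16-bit floating-point format),
# so the processor moves and multiplies half as many bytes per value.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    scores = model(batch)

print(scores.dtype)  # torch.bfloat16 for the autocast-eligible output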

Inferencing is the final phase of the AI DevOps cycle, and the IBM Power10 platform was purposefully designed to be AI-optimized. As a result, clients can extract insights from data in a more cost-effective manner, both in terms of energy efficiency and by reducing the requirement for additional accelerators.

Conclusions

Leveraging the IBM Power10 chip can significantly accelerate AI inferencing while reducing energy consumption and the need for additional accelerators. The Matrix Math Accelerator (MMA) engines built into the chip can enhance the speed and efficiency of inferencing processes without requiring any additional effort from AI DevOps teams. Furthermore, the ability to process reduced-precision data can further enhance the performance of the inferencing model without sacrificing accuracy. All of these factors make the IBM Power10 chip an ideal choice for clients seeking to extract insights from data in a cost-effective manner.

 


Here at CourseMonster, we know how hard it may be to find the right time and funds for training. We provide effective training programs that enable you to select the training option that best meets the demands of your company.

For more information, please get in touch with one of our course advisers today or contact us at training@coursemonster.com

Posted in IBM | Tagged IBM

Get your Cloud Access Management Data and Examine it

Posted on March 20, 2023 by Marbenz Antonio


As an IBM Cloud account holder, it’s your responsibility to establish and supervise access management for your cloud resources. A previous post discussed methods for obtaining information on IBM Cloud account privileges and for enhancing security by detecting inactive identities. In this blog post, we’ll provide an overview of available APIs that enable you to acquire identity and access management (IAM) and resource data. Following that, we’ll demonstrate how to examine this security data. By utilizing these insights, you can enhance the security of your IBM Cloud account and its resources.

Numerous techniques exist for analyzing access management data, but our preferred method is to extract the data and save it in a relational database. This enables us to merge data from various origins and execute SQL queries, facilitating the creation of security reports.

Overview of IBM Cloud APIs for platform services.

Overview: Access management data

If you have experience working with IBM Cloud and have explored security and compliance in the past, you might already be familiar with all the resources listed below for enhancing account security:

  • Activity data logged to Activity Tracker.
  • Runtime logs found in Log Analysis.
  • Security posture analysis performed by the Security and Compliance Center.
  • IAM reports on inactive identities and inactive policies.

Apart from the resources mentioned above, there exists data related to the account, its resources, user and service IDs, and their permissions. We refer to this data as “access management data” in this article. There are numerous ways to access and retrieve this data, including through the IBM Cloud console (UI), command line interface (CLI), and other interfaces. However, we will concentrate on the Application Programming Interfaces (APIs) for the IBM Cloud platform services in this article (as displayed in the screenshot above). Their documentation is available in the API and SDK reference library under the Platform category.

The key IBM Cloud APIs relevant to access management data are as follows:

  • User Management to retrieve a list of users in the cloud account to analyze
  • IAM Identity Services to look into service IDs, trusted profiles, and API keys
  • IAM Access Groups for details on access groups and their members
  • IAM Policy Management to analyze access policies of access groups, service-to-service authorizations, and access roles
  • Resource Manager for details on resource groups (which are often referenced in access policies)
  • Resource Controller to retrieve information about service instances

Although there are other APIs accessible, the ones listed above are the primary ones. These APIs provide a (mostly static) overview of the security configuration by collecting data. This overview is similar, in a general sense and disregarding specific details, to the evaluation performed by the IBM Cloud Security and Compliance Center.

To use each of the API functions, an IAM access token is required, and each returns a JSON data set. However, the true worth of these APIs is in combining the data they provide to create a comprehensive view of the security setup – similar to assembling a puzzle from numerous pieces. This is the first step toward security analysis. The data from all APIs can either be held briefly in memory (for generating a few reports) or persisted for more in-depth analysis. They chose to persist the data by breaking down the JSON objects into relational tables. This enables us to utilize SQL queries and leverage their expressive capabilities for analysis.

It’s worth noting that the analysis we perform does not encompass any dynamic membership rules or context- or time-based access decisions. Such decisions necessitate more dynamic data and are made during IAM processing. We do not aim to replicate IAM decisions as they are highly contextual and dynamic. Instead, their analysis helps in identifying potential areas of concern within the security setup that may require further investigation and possible enhancement.

Retrieve and store

To construct the foundation from access management data, they began by transforming the various JSON objects into relational tables. Several JSON objects have nested data, such as when listing policies, where the results include metadata, subjects, roles, and resource information associated with the policy. Consequently, the data store has four tables related to policies. Similar transformations are required for other API results, resulting in the database schema illustrated below:

Entity Relationship diagram for the database schema.

They decided to use Python to retrieve and store the data by leveraging pre-existing code from their past projects. Depending on the API function, retrieving data may necessitate paging through result sets. Typically, a single result is limited to 100 objects. Some API functions require additional parameters for obtaining enriched results, which include supplementary information that is beneficial for security analysis.
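
As a minimal sketch of this retrieval step in Python: the token endpoint below is the standard IAM one, and the policies endpoint and response fields follow the IAM Policy Management API, but treat these details as assumptions and check the API reference before relying on them:

import requests

IAM_TOKEN_URL = "https://iam.cloud.ibm.com/identity/token"
POLICIES_URL = "https://iam.cloud.ibm.com/v1/policies"  # IAM Policy Management API

def get_iam_token(api_key: str) -> str:
    """Exchange an IBM Cloud API key for a short-lived IAM access token."""
    resp = requests.post(
        IAM_TOKEN_URL,
        headers={"Accept": "application/json"},
        data={"grant_type": "urn:ibm:params:oauth:grant-type:apikey", "apikey": api_key},
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def list_policies(token: str, account_id: str) -> list:
    """Fetch the account's access policies as a list of JSON objects."""
    resp = requests.get(
        POLICIES_URL,
        headers={"Authorization": f"Bearer {token}"},
        params={"account_id": account_id},
    )
    resp.raise_for_status()
    # Several of the other platform APIs (users, service IDs, resource instances)
    # page their results -- typically 100 objects per call -- so those calls are
    # wrapped in a loop over the offset or next-page token they return.
    return resp.json().get("policies", [])

# Usage sketch:
# token = get_iam_token("YOUR_API_KEY")
# for policy in list_policies(token, "YOUR_ACCOUNT_ID"):
#     print(policy["id"], policy["type"])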

The code employs SQLAlchemy, which is a Python database toolkit, to interact with the data store. This provides the flexibility to switch between different backend databases, such as SQLite, PostgreSQL, or Db2 on Cloud, with ease.
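
A simplified sketch of that storage layer is shown below. The real schema splits policies into several related tables (see the diagram above), whereas this illustration flattens a few assumed fields into a single table:

from sqlalchemy import Column, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Policy(Base):
    """A flattened slice of one policy JSON object (illustrative columns only)."""
    __tablename__ = "policies"
    id = Column(String, primary_key=True)
    type = Column(String)
    subjects = Column(String)
    roles = Column(String)
    resources = Column(String)

# Swapping SQLite for PostgreSQL or Db2 on Cloud only changes this URL.
engine = create_engine("sqlite:///iam_data.db")
Base.metadata.create_all(engine)

def store_policies(policies: list) -> None:
    """Break the JSON results down into rows and persist them."""
    with Session(engine) as session:
        for p in policies:
            session.merge(Policy(
                id=p["id"],
                type=p.get("type"),
                subjects=str(p.get("subjects")),
                roles=str(p.get("roles")),
                resources=str(p.get("resources")),
            ))
        session.commit()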

Analyze cloud access management

Now that they have established the data store, they can proceed with the analysis of the cloud access management data. By consolidating data that is typically dispersed across different console pages or requires multiple API calls/CLI commands, they can effortlessly address security-related inquiries, such as:

  • Which cloud service instances are referenced in access policies but do not exist?
  • Which cloud service instances exist but are not used in any access group and their policies?
  • Which users (or service IDs or trusted profiles) are not a member of any access group?
  • Which access groups do not have any policies with Reader or Viewer roles?
  • Which access groups do not reference any region or resource group in their policies?

The SQL queries required to answer the above questions can be executed from a Python script in a Jupyter or Zeppelin notebook, or any other SQL client. A section of a basic text-based report generated by a straightforward Python script is depicted in the screenshot below. The associated SQL statement incorporates multiple tables from our data store using join operations:

Report generated on existing IBM Cloud IAM Access Groups.
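
As an illustration of the kind of join involved, the query below targets the fourth question in the list above: which access groups have no policy with a Reader or Viewer role. The table and column names (access_groups, policies, policy_roles) are assumptions for this sketch; adjust them to your actual schema:

from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///iam_data.db")  # the relational data store described above

# Assumed schema: access_groups(id, name), policies(id, subject_access_group_id),
# policy_roles(policy_id, role_name).
NO_READ_ROLE_SQL = text("""
    SELECT ag.id, ag.name
    FROM access_groups AS ag
    LEFT JOIN policies AS p
           ON p.subject_access_group_id = ag.id
    LEFT JOIN policy_roles AS r
           ON r.policy_id = p.id AND r.role_name IN ('Reader', 'Viewer')
    GROUP BY ag.id, ag.name
    HAVING COUNT(r.policy_id) = 0
""")

with engine.connect() as conn:
    for group_id, name in conn.execute(NO_READ_ROLE_SQL):
        print(f"{name} ({group_id}) has no Reader/Viewer policy")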

Conclusions

Analyzing cloud access management data is crucial to improve the security of your IBM Cloud account and its resources. The IBM Cloud platform services provide a set of APIs that allow you to obtain identity and access management (IAM) and resource data, which can be analyzed to gain insights into your cloud security setup. By combining data from multiple sources and running SQL queries, you can generate security reports and answer important security-related questions. Using tools like Python and SQLAlchemy, you can easily retrieve and store the data in a relational database, enabling deeper analysis and reporting. By taking advantage of these resources, you can enhance the security of your IBM Cloud account and better protect your resources.


Posted in IBM | Tagged IBM, IBM Cloud Services

How to Secure Your Supply Chain in 5 Easy Steps

Posted on March 13, 2023 by Marbenz Antonio


One way to secure supply chains from a chain reaction of cyberattacks is by utilizing IBM Security Supply Chain Cyber Risk Management Services.

The supply chain is an attractive target for cybercriminals because it involves various third-party organizations, vendors, and manufacturers that have access to the same data and systems. A successful cyberattack on a single point in the supply chain can trigger a domino effect of destruction, causing significant operational disruption, financial losses, and reputational damage to the organization and its partners. This highlights the potential for long-term negative consequences on the reputation of the affected organization and the trust of its partners and customers.

Cyberattacks in manufacturing and supply chains

As per the 2023 IBM Security X-Force Threat Intelligence Index, the manufacturing sector witnessed the most extortion cases among all industries, accounting for 30% of such incidents. Over 25% of the total attacks, including ransomware, business email compromise (BEC), and DDoS, were related to extortion. Given the manufacturing industry’s low threshold for downtime and vulnerability to double-extortion tactics, it becomes an alluring target for cybercriminals.

Over 50% of security breaches are linked to supply chain and third-party suppliers, costing an average of USD 4.46 million. Due to the constantly changing and intricate nature of the supply chain, it is challenging for organizations to keep track of the latest cybersecurity risks and identify possible weaknesses. If a cyberattack does take place, it may be difficult to pinpoint the origin of the breach. This confusion can delay the response time, and in the case of a data breach, every moment is important.

As per the IBM Security X-Force Threat Intelligence Index, there has been a minor decrease in ransomware attacks, but the execution time has decreased by 94% in recent years. What previously took months for attackers can now be accomplished in just a few days. This rapid pace of cyberattacks necessitates a proactive and threat-focused cybersecurity strategy for organizations.

Supply chains are highly susceptible to cyberattacks due to the potentially catastrophic consequences of a security breach. Both the organizations within the supply chain and the cybercriminals are aware of this vulnerability.

To protect against cyberattacks, it is essential to comprehend their occurrence and method of operation. When implementing cyber risk management, it is crucial to consider the different types of cybersecurity incidents that can potentially harm the supply chain. These incidents include phishing attacks, malware infections, data breaches, and ransomware attacks.

How to secure your supply chain

In today’s digital environment, securing the supply chain through cyber risk management is critical. Several organizations have an uncoordinated approach to supply chain security, which presents challenges such as identifying and managing risks, assessing third-party software, limited threat intelligence for swift decision-making, and inadequate operational resilience. To enhance their cybersecurity posture, supply chains must adopt a proactive, well-defined, and adaptive approach, utilizing data and AI optimization.

Consider incorporating the following five best practices to develop a cyber risk management plan that safeguards your supply chain:

  1. Conduct a risk assessment: Conduct frequent evaluations of cyber risks associated with your supply chain, including the systems and processes utilized by your suppliers. Detect any potential weaknesses and prioritize the most critical ones with significant business consequences for prompt mitigation.
  2. Establish security protocols: Establish well-defined security protocols for your suppliers, comprising guidelines for data protection, access control, and incident response. Confirm that your suppliers have implemented adequate security measures such as firewalls, encryption, strong passwords, and multi-factor authentication.
  3. Implement continuous monitoring: Maintain continuous surveillance of your supply chain for any security incidents, such as hacking attempts, data breaches, and malware infections. Create an incident response plan in case of a security breach and periodically conduct tabletop or immersive exercises to improve muscle memory for executing the plan.
  4. Encourage supplier education: Numerous organizations provide cybersecurity training and education to their workforce to secure company data and assets. If your supplier doesn’t offer structured learning, contemplate extending cybersecurity education and training to your suppliers on best practices and the significance of safeguarding sensitive data, or direct them to available free resources. Motivate them to implement robust security measures and remain attentive to cybersecurity threats.
  5. Regularly review and update policies: Consider regularly reviewing and revising your cyber risk management policies to keep them current and applicable. This will assist you in staying ahead of emerging threats and maintaining the security of your supply chain.

 



Posted in IBM | Tagged IBM

According to IBM’s Metastore aaS there is No Lake Without Metadata

Posted on March 1, 2023 by Marbenz Antonio


This post explores the enhanced functionality of IBM Cloud for constructing and overseeing cloud data lakes using IBM Cloud Object Storage. Specifically, it explains the function of table metadata and how the IBM Cloud Data Engine service provides this crucial element for your data lake.

It is no revelation that metadata is an important component that requires management in data and analytics solutions. This topic is often linked with data governance, and rightly so, as this type of metadata guarantees effortless discoverability, safeguarding of data, and tracking of data lineage.

Nevertheless, metadata encompasses more than just data governance, as it also encompasses what is known as technical metadata. This refers to information about a dataset’s schema, data types, and statistical details regarding the values in each column. Technical metadata is especially important when discussing data lakes because unlike integrated database repositories like RDBMS that have built-in technical metadata, it is a separate component in a data lake that requires explicit setup and maintenance.

Usually, this component is known as the metastore or table catalog. It comprises technical details regarding your data that are necessary to build and run analytical queries, particularly SQL statements.

The growing adoption of data lakehouse technology is driving technical metadata to be partly collocated and stored alongside the data itself in combined table formats such as Iceberg and Delta Lake. However, this does not negate the requirement for a centralized and dedicated meta store component since table formats can only manage metadata at the table level. Data is usually stored across multiple tables in a more or less complex table schema, which may also include details about referential relationships between tables or logical data models referred to as views.

To ensure optimal performance, a metastore component or service is essential in every data lake. The Hive Metastore is the most commonly used metastore interface, which is supported by a wide range of big data processing engines and libraries. Despite its origins in the Hadoop ecosystem, it is no longer limited to or reliant on Hadoop and is often utilized in Hadoop-free environments, such as in cloud-based data lake solutions.

The metadata stored in a Hive Metastore is just as vital as the actual data in the data lake and should be treated with the same level of importance. Therefore, it’s crucial to ensure the persistence and high availability of the metastore’s metadata and include it in any disaster recovery plan.

IBM launches IBM Cloud Data Engine

As part of our continuous efforts to enhance IBM Cloud’s native data lake capabilities, we introduced the IBM Cloud Data Engine in May 2022. This addition builds upon our existing serverless SQL processing service, previously referred to as IBM Cloud SQL Query, by incorporating a fully managed Hive Metastore functionality.

Each instance of IBM Cloud Data Engine is now a dedicated namespace and instance of a Hive Metastore, providing the ability to manage, configure, and store metadata related to your table and data model across all of your data lake data on IBM Cloud Object Storage. You can be assured that the Hive Metastore data is always available, as it is integrated into the Data Engine service itself. Additionally, the serverless model applies to the Hive Metastore, meaning that you are only charged for the actual requests made, without any fixed costs for having a Data Engine instance with its own metadata in the Hive Metastore.

This integration seamlessly incorporates the serverless SQL-based functions for data ingestion, data transformation, and analytical querying that IBM Cloud Data Engine inherits from the IBM Cloud SQL Query service.

In addition, Data Engine can now function as a Hive Metastore, enabling it to integrate with other big data runtimes that are deployed and provisioned elsewhere. For example, you can connect the Spark runtime services in IBM Cloud Pak for Data with IBM Watson Studio or IBM Analytics Engine to your Data Engine instance as the Hive Metastore that serves as a relational table catalog for your Spark SQL jobs. The diagram below provides a visual representation of this architecture.


Using Data Engine with Spark aaS in IBM Cloud

Utilizing Data Engine as your table catalog is a straightforward process when leveraging the pre-existing Spark runtime services in IBM Cloud and IBM Cloud Pak for Data, as the necessary connectors to Data Engine’s Hive Metastore are already integrated out-of-the-box. The following PySpark code can be used to configure a SparkSession object to work with your specific instance of IBM Data Engine:

from dataengine import SparkSessionWithDataengine

# Placeholders: supply your own Data Engine instance ID and an API key that can access it.
instancecrn = "<your Data Engine instance ID>"
apikey = "<your API key to access your Data Engine instance>"

session_builder = SparkSessionWithDataengine.enableDataengine(instancecrn, apikey)
spark = session_builder.appName("My Spark App").getOrCreate()

With the SparkSession object configured, you can proceed to use it as normal, such as retrieving a list of the currently defined tables and executing SQL statements that query these tables.

spark.sql('show tables').show()
spark.sql('select count(*), country from my_customers group by country').show()
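
Because the table catalog lives in the Data Engine instance rather than in any single Spark cluster, a table registered from one engine becomes visible to every other engine connected to the same instance. Below is a sketch of registering an external table over existing Object Storage data; the column list and the object-storage location are placeholders, and the exact URI scheme depends on the Object Storage connector configured in your Spark environment:

spark.sql("""
    CREATE TABLE IF NOT EXISTS my_customers (
        customer_id STRING,
        name        STRING,
        country     STRING
    )
    USING parquet
    LOCATION 'cos://us-geo/my-bucket/customers/'  -- placeholder bucket and prefix
""")
spark.sql("show tables").show()  # the new table now appears in the shared catalog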

Using Data Engine with your custom Spark deployments

If you are managing your own Spark runtimes, you can still utilize the same mechanisms outlined above. However, before proceeding, you must first establish the connector libraries for Data Engine within your Spark environment.

Install the Data Engine SparkSession builder

  1. Download the jar file for the SparkSession builder and place it in a folder in the classpath of your Spark installation (normally you should use the folder “user-libs/spark2”).
  2. Download the Python library to a local directory on the machine of your Spark installation and install it with pip:
pip install --force-reinstall <download dir>/dataengine_spark-1.0.10-py3-none-any.whl

Install and activate the Data Engine Hive client library

  1. Download the Hive client library (the download link is provided in the Hive Metastore documentation for Data Engine) and store it in a directory on the machine where you run Spark.
  2. Specify that directory name as an additional parameter when building the SparkSession with Data Engine as the catalog:
session_builder = SparkSessionWithDataengine.enableDataengine(instancecrn, apikey, pathToHiveMetastoreJars=<directory name with hive client>)

For additional information, we recommend consulting the Hive Metastore documentation for Data Engine. Furthermore, their Data Engine demo notebook is also available for download and use in your own Jupyter notebook environment or within the Watson Studio notebook service in Cloud Pak for Data.

Chapter 10 of the notebook contains a comprehensive setup and usage demonstration for utilizing Spark with Hive Metastore in Data Engine. Additionally, a brief demo of this notebook can be found at the 14:35 minute mark in the previously mentioned demo video for the “Modernize your Big Data Analytics with Data Lakehouse in IBM Cloud” webinar.

Conclusion

This article describes the new Hive Metastore as a Service capability in IBM Cloud, which provides a central component for building modern data lakes in IBM Cloud without the need for Day 1 setup or Day 2 operational overhead. To get started, simply provision an IBM Cloud Object Storage instance for your data and a Data Engine instance for your metadata to create a serverless, cloud-native data lake. From there, you can begin ingesting, preparing, curating, and using your data with the Data Engine service itself, or with your custom Spark applications, Analytics Engine service, Spark runtimes in Watson Studio, or any other custom Spark runtime that is connected to the same data on Object Storage and the same metadata in Data Engine.

 



Posted in IBM | Tagged IBM, IBM Cloud Services

The Top 4 Reasons for Using IBM Security ReaQta as Your EDR Solution

Posted on March 1, 2023 by Marbenz Antonio


EDR solutions such as IBM Security ReaQta can assist security teams in identifying “early warning signs” as cyber attackers become skilled in avoiding detection and rapidly encrypting the data of organizations.

As attackers become more rapid and elusive, it has become challenging to navigate a constantly changing threat landscape. Based on the IBM Threat Intelligence Index 2023 report, the time taken by attackers to execute ransomware attacks has decreased by 94% over the last few years, with what used to take months now taking only a few days. As a result, organizations must adopt a proactive strategy to keep up with the increasing speed of attackers.

The problem: Endpoint detection challenges in cybersecurity

The post-pandemic rise in remote work patterns has caused rapid growth and interconnection of endpoints, resulting in a unique set of cybersecurity issues. This new way of working has resulted in a surge in advanced threat activity, and security teams have had to deal with an increased number of alerts to investigate. Unfortunately, many of these alerts turn out to be false positives, leading to significant alert fatigue.

Security teams that are already stretched thin are left with minimal time to respond, making it difficult to protect endpoints from advanced zero-day threats. Without the appropriate endpoint detection and response (EDR) tools, preventing costly business delays can be challenging.

The fix: Amplifying your cybersecurity with EDR solutions

To provide a prompt and effective response, security teams must implement a robust endpoint security solution. This is because endpoint protection plays a crucial role in containing threats before devices are infected or encrypted by ransomware. Additionally, it offers support throughout various stages of the incident response process and fills in gaps left by traditional antivirus solutions by providing enhanced detection, visibility, and control, preventing widespread malware or ransomware damage.

The need: Accelerating your response to threats and improving efficiency within the SOC teams

Rapid detection of endpoint threats and malware reporting can significantly minimize the impact of an attack, leading to significant savings in terms of time and expenses. To develop efficient responses to cyberattacks, defenders can leverage EDR tools to achieve the following:

  1. Leverage AI and security automation to speed response to threats.
  2. Improve efficiency within the Ops teams to save both time and expenses.
  3. Get high-fidelity alerts that help reduce analyst workloads.
  4. Gain deep visibility into all processes and applications running on all endpoint devices.

IBM Security ReaQta is a sophisticated and user-friendly EDR solution that can aid in all of these areas. Let’s explore how it works.

1. Leverage AI and security automation to speed response to threats

By utilizing artificial intelligence (AI) and machine learning (ML) technology, ReaQta provides a high degree of automation in detecting and addressing endpoint threats. It can swiftly identify and resolve both known and unknown threats or fileless attacks in near real time. To gain a better understanding of ReaQta’s malware detection and automated response capabilities, let’s take a closer look at how it functions.

ReaQta dashboard

IBM Security ReaQta provides an alert overview of your endpoint ecosystem.

The ReaQta dashboard is intentionally designed to be simple and straightforward, in contrast to other complex dashboards. It provides a minimalist and user-friendly interface that makes it easy to use. The home screen displays a comprehensive summary of alerts, indicating the status of all endpoint devices.

An alert is triggered

The behavioral tree triggers an alert on detecting any anomalies.

IBM Security ReaQta promptly detects anomalous activities such as ransomware behavior. If any abnormal behavior is detected, the system generates an automatic alert. The severity of the alert, which in this case is medium, is displayed in the upper left corner of the screen. The right side of the screen provides additional information about the alert, such as the cause of the trigger point, the affected endpoints, and how the threat is linked to the MITRE ATT&CK framework.

Investigating the alert

Security teams can quickly analyze if the threat is malicious or benign by clicking Alert details.

Analysts can quickly assess whether a threat is malicious or benign and determine if it is a false positive by clicking on the alert details page. This speeds up the response process and reduces alert fatigue, as analysts do not need to waste time and effort sifting through extensive event logs to pinpoint the source of the problem.

A visual storyline is automatically created as an attack unfolds.

Whenever an alert is generated, a behavior tree is constructed, offering complete visibility into the alert and attack. This user-friendly and visually compelling narrative presents a chronological timeline of the attack, including the applications and behaviors that triggered the alert and how the attack unfolded. Security teams can easily access a comprehensive overview of the threat activity on a single screen, enabling them to make quick decisions.

Detailed behavioral analytics and full attack visibility

Full attack visibility ensures analysts understand the scope of the attack and respond accordingly.

Detailed information about the launched applications is available by clicking on the circles in the behavioral tree function. Although nothing may appear suspicious at this stage, some attacks initiated through signed applications may elude antivirus or firewall software.

Simple behavior tree visualization for alert prioritization

Analysts can easily prioritize their search when looking for an alert.

To expedite analysts’ examination, ReaQta presents the threat activity through an uncomplicated behavior tree representation using circles and hexagons. Circles represent applications, while hexagons denote behaviors. Each shape has a different color: red indicates high risk, orange indicates medium risk, and yellow indicates low risk. These colors indicate the severity and assist security teams in prioritizing their search when investigating an alert.

2. Improving efficiency within the operations teams with ReaQta

The use of EDR security tools such as ReaQta can enhance the operational efficiency of security teams by allowing for swift and efficient threat remediation, process termination, and isolation of infected devices. In addition, ReaQta supports forensic analysis and reconstruction of the root cause of the attack, enabling operations teams to quickly remediate threats and restore business continuity.

Remediating and isolating threats with ReaQta

Quick view showing how many other endpoints were affected by the malicious activity.

After identifying a malicious threat, analysts can use ReaQta to quickly respond and protect the system. They can access containment controls to triage the threat and create a blocklist policy that prevents the threat from running on other endpoints.

By checking the number of compromised endpoints, security teams can determine whether the threat has been isolated or is recurring. They can terminate the threats and isolate infected endpoints from the network, regardless of their location, such as Singapore, the U.S., the UK, Africa, and so on. If the endpoint is connected to the server, the malware can be terminated and added to the blocklist in real time.

Preventing similar threats in the future

Analysts can create workflows to counteract similar threats.

With ReaQta, you can establish workflows that target specific threats, which can be automatically activated when a similar threat is detected in the future.

As part of the remediation plan, ReaQta offers the ability to choose and remove any dropped executables, filesystem, or registry persistence. Users can also select which endpoints to isolate and then close the alert.

3. Get high-fidelity alerts that help reduce analyst workloads

ReaQta is capable of producing alerts of high quality and can help in reducing investigation time from minutes to seconds by utilizing threat intelligence and analysis scoring. Analysts can quickly identify potential cyber threats by utilizing the metadata-based analysis to speed up triage. Additionally, ReaQta’s threat-hunting capabilities enable real-time infrastructure-wide searches for indicators of compromise (IOC), behaviors, and binaries.

Threat classification to help reduce false positives

Cyber Assistant learns from analyst decisions and helps reduce alert fatigue.

After closing an alert, it is crucial for the analyst to determine whether the threat was malicious or benign as Cyber Assistant, an AI-based alert management system within the endpoint protection platform, constantly learns from the analyst’s actions.

The system gathers data and applies AI algorithms to constantly learn from threat patterns and identify similar threats. If a new threat exhibits telemetry above 85% similarity to a known threat, it leverages its learned behaviors to classify the new threat accordingly.

The knowledge gained by Cyber Assistant helps to decrease the number of false positives. As a result, it improves the accuracy of high alerts and reduces the workload of analysts, thereby minimizing alert fatigue and enhancing the efficiency of security teams.

4. Gain deep visibility into all processes and applications running on all endpoint devices

The NanoOS is a lightweight agent that operates at the hypervisor layer outside of the operating systems. It is intentionally designed to be undetectable, making it impervious to modifications, shutdowns, or replacements by malware or attackers.

NanoOS, which sits in the hypervisor layer and is undetectable, can be leveraged by security teams to covertly track the movements of attackers to comprehend their goals until the security team terminates their access. Once this is done, the ReaQta security solution can be implemented to remediate compromised devices without any disruption.

Conclusion

IBM Security ReaQta is an effective endpoint security solution that helps cybersecurity teams identify vulnerabilities. Although endpoint detection and response (EDR) solutions are not the only protection mechanism for threat detection, they should be the first mechanism, along with an extended detection and response (XDR) security solution, to identify suspicious behavior.

IBM Security ReaQta seamlessly integrates with QRadar SIEM, enabling organizations to have a more secure defense system that unifies protect, detect, and response capabilities, thereby improving their IT security against advanced cyberattacks.

ReaQta also offers a 24×7 managed detection and response (MDR) service that serves as an extension of your security team, ensuring that endpoint threats are contained and remediated as soon as they are detected.

 



Posted in IBM | Tagged IBM

Hybrid Cloud Myths and Reality in the Modern Era

Posted on February 28, 2023 by Marbenz Antonio


Exploring the crucial role of IBM zSystems in IBM’s hybrid cloud environment.

At times, industry colleagues may discuss the idea of moving away from the mainframe or raise doubts about the ongoing relevance of the IBM zSystems platform in delivering unique value to their businesses. While public clouds, edge solutions, and distributed technologies all have important roles to play within a hybrid cloud setup, IBM zSystems remains crucial for numerous enterprise IT environments, including IBM’s own. This is due to its ability to offer the necessary performance, resilience, security, sustainability, and efficiency required for handling mission-critical workloads.

In this context, this post helps to debunk certain misconceptions and elucidate the significant role that IBM zSystems plays in IBM’s hybrid cloud setup, both now and in the future.

Myth: The mainframe is no longer core to IBM’s own enterprise IT portfolio or strategy

Truth: The IBM zSystems platform is a fundamental component of IBM’s hybrid cloud strategy, and the organization relies heavily on it today. This dependence is not only because IBM produces and distributes zSystems, but because it is, without a doubt, the most suitable platform for the tasks at hand. IBM operates nearly 600 applications with at least one segment running on IBM zSystems, which constitutes over 65% of all financially critical applications. Business-critical functions like quote-to-cash, finance, and HR operations run on z/OS, z/VM, and Linux on zSystems. This includes IBM’s integrated enterprise resource planning (iERP) solution, its global credit system, accounting information repository, global logistics system, and common ledger system.

Myth: The mainframe is expensive

Truth: The overall cost of maintaining applications on IBM zSystems can be lower than the cost of migrating to alternative platforms, owing to the platform’s extended lifespan, high utilization, and backward compatibility. By adopting a technology business management (TBM) approach, we are actively showcasing that applications hosted on zSystems can exhibit superior performance, enhanced security, and lower total cost of ownership in a contemporary operating environment. Numerous clients have also realized the benefits of utilizing existing capacity on IBM zSystems, which reduces public cloud expenses. Additionally, we employ “intelligent workload placement” by moving containerized application workloads across different architectures to optimize performance, sustainability, and cost-effectiveness. This approach forms the core of a modern, efficient hybrid cloud setup.
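To make the “intelligent workload placement” idea concrete, here is a minimal sketch, assuming a Red Hat OpenShift or Kubernetes cluster that mixes x86 and IBM zSystems (s390x) worker nodes and the official kubernetes Python client; the application name, container image, and namespace are hypothetical placeholders. It pins a containerized workload to s390x nodes by setting a node selector, which is one way a placement decision can be expressed and later revisited as cost or performance needs change.

    # Hypothetical placement sketch: schedule a containerized workload onto
    # IBM zSystems (s390x) worker nodes in a mixed-architecture cluster.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="billing-api"),          # placeholder name
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "billing-api"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "billing-api"}),
                spec=client.V1PodSpec(
                    # Standard node label: kubernetes.io/arch reports s390x on Linux on Z.
                    node_selector={"kubernetes.io/arch": "s390x"},
                    containers=[client.V1Container(
                        name="billing-api",
                        image="icr.io/example/billing-api:1.0",     # placeholder image
                    )],
                ),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="erp", body=deployment)

Changing the node selector (or replacing it with affinity rules) is all it takes to move the same workload to a different architecture, provided a multi-architecture container image exists.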

Myth: Modern applications don’t run on the mainframe

Truth: IBM zSystems provides a secure, cost-effective, and energy-efficient platform for hosting contemporary applications. By incorporating Red Hat OpenShift and Red Hat Enterprise Linux on IBM zSystems, alongside continuous integration and continuous deployment (CI/CD) pipelines and Infrastructure as Code, it presents a compelling and contemporary environment that harnesses the expertise of agile developers.

Myth: If “cloud” is the destination, we should move applications off the mainframe

Truth: Absolutely not! Within a hybrid cloud ecosystem, the placement of application workloads must be optimized to cater to operational needs that balance factors such as sustainability, performance, agility, reliability, and cost-effectiveness. IBM zSystems outshines other platforms in several areas, including Infrastructure as Code, transparent operating system patching without application downtime, enhanced security, increased reliability, and a reduced environmental footprint. With the incorporation of CI/CD pipelines for applications on IBM zSystems, it bears a striking resemblance to operations on other cloud architectures.

Myth: We need specialized and antiquated skills to use the mainframe

Truth: Contemporary tools lessen the demand for specialized expertise in maintaining outdated technologies still used by certain business applications. Notably, IBM zSystems supports a range of modern technologies and tools, such as Python, YAML, Java, Kubernetes, and Ansible. To make the most of IBM zSystems’ capabilities, it’s necessary to possess proficiency in these skills, which are becoming increasingly essential in our team and the industry as a whole. By combining modern skills with the platform’s cutting-edge features, we can achieve all the benefits that a pivotal component of a modern hybrid cloud operating environment has to offer.
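As a small illustration of that point, the snippet below uses nothing but Python's standard library and runs unchanged on a laptop, a public cloud VM, or Linux on IBM zSystems, where platform.machine() reports the s390x architecture; no mainframe-specific tooling is involved.

    # arch_check.py - the same Python runs on x86_64, aarch64, or s390x (Linux on Z).
    import platform

    def describe_host() -> str:
        """Return a short description of the machine this interpreter runs on."""
        return f"{platform.system()} on {platform.machine()}"

    if __name__ == "__main__":
        print(describe_host())   # e.g. "Linux on s390x" on IBM zSystems

The same holds for YAML-driven tools such as Ansible and Kubernetes, which treat s390x as just another target architecture.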

Myth: The mainframe is old

Truth: Would you regard a 2023 Ferrari as outdated? Neither would I. Despite being renowned for their backward compatibility, the latest IBM z16 and IBM LinuxONE 4 (for Linux-only environments) are equipped with cutting-edge features such as embedded AI processors, pervasive encryption, and quantum-safe cryptography. With these innovations, contemporary IBM zSystems deliver exceptional performance, availability, and security, and have earned the trust of renowned global entities such as banks, insurance companies, airline reservation systems, and retailers thanks to their demonstrated transaction-processing prowess and resilience.

 


Here at CourseMonster, we know how hard it may be to find the right time and funds for training. We provide effective training programs that enable you to select the training option that best meets the demands of your company.

For more information, please get in touch with one of our course advisers today or contact us at training@coursemonster.com

Posted in IBM | Tagged IBM

Networking for the Modern Enterprise: Application-Centric

Posted on February 28, 2023 by Marbenz Antonio

Networking for the application in a cloud-centric world - TechHQ

In today’s business landscape, companies utilize applications and services that are dispersed among on-premises infrastructure, multiple cloud environments, and intelligent edge networks.

As we approach 2025, the majority of enterprise data – approximately 75% – is projected to be generated and managed at the edge. Furthermore, due to the growing adoption of a hybrid work model, enterprise application users are increasingly mobile.

The changing demands of applications and users are beyond the scope of traditional networking models, such as conventional SDN solutions. As a result, NetOps and CloudOps teams are under mounting pressure. Without the ability to provision networking at the application level, and with limited tools to enforce policies in a dynamic setting, NetOps teams find it challenging to maintain fine-grained control of the network and promptly address the evolving requirements of applications.

Understanding the obstacles in the way

To ensure a seamless experience for customers and employees, DevOps teams within the Enterprise Line of Business (LoB) are tasked with maintaining the performance and reliability of their applications. In this context, the way applications and services are interconnected is just as crucial as the applications themselves. Regrettably, NetOps teams are often brought in towards the end of the application development process, making networking an afterthought.

According to feedback from IBM’s customers, the three most common IT connectivity challenges that lead to deployment delays are:

  1. Multi-dimensional connectivity: Complicated processes involving DevOps, NetOps, and SecOps teams are resulting in prolonged provisioning times for establishing detailed connectivity between applications and services. It is not uncommon for network provisioning to take several weeks.
  2. Network agility: DevOps teams expect network automation to offer the same level of agility as they experience in the compute and storage domains. Unfortunately, network automation is frequently not as mature as computing and storage automation and falls short of fulfilling expectations.
  3. Lack of visibility caused by silos: The Operations (Ops) teams frequently operate independently, with their performance metrics and Service Level Agreements (SLAs) existing in isolation from one another. Consequently, troubleshooting degraded application performance can become convoluted and protracted.

Are we ready for DevOps-friendly, application-centric connectivity?

Reevaluating connectivity from an application standpoint can provide a solution to the aforementioned challenges, allowing DevOps teams to achieve self-service connectivity under the supervision of the NetOps and SecOps teams. By seamlessly integrating connectivity provisioning as an extra step in the CI/CD pipeline, DevOps teams can view the network as an additional cloud resource, resulting in straightforward, scalable, smooth, and secure application-level connectivity in any environment, whether on-premises, at the edge, or in the cloud.

This model also ensures consistent policy administration throughout all aspects of IT, significantly streamlining policy management and improving security measures.
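What might such a pipeline step look like? The sketch below is purely illustrative, not a real product API: the controller URL and the intent schema are hypothetical, and it simply shows the shape of declaring application-level connectivity as code from the same CI/CD pipeline that deploys the application.

    # connectivity_step.py - hypothetical CI/CD step that declares application-level
    # connectivity intent. The controller endpoint and payload schema are invented
    # for illustration; they do not correspond to any specific product API.
    import json
    import urllib.request

    CONTROLLER_URL = "https://connectivity-controller.example.com/v1/intents"  # placeholder

    def request_connectivity(source: str, destination: str, port: int) -> int:
        """Submit a connectivity intent and return the HTTP status code."""
        intent = {
            "source": source,            # e.g. an edge-hosted frontend service
            "destination": destination,  # e.g. an on-premises database service
            "port": port,
            "policy": "zero-trust",      # connect only this service pair, nothing broader
        }
        req = urllib.request.Request(
            CONTROLLER_URL,
            data=json.dumps(intent).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    if __name__ == "__main__":
        # Called from the same pipeline that deploys the application itself.
        print(request_connectivity("payments-frontend", "payments-db", 5432))

Because the intent names services rather than subnets or VLANs, NetOps and SecOps can review and enforce policy on the same artifact that DevOps checks into version control.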

By conceptualizing networks within the framework of applications and merging NetOps with DevOps and SecOps, enterprises can experience significant advantages, including:

  • Seamless auto-discovery across applications and infrastructure resources.
  • Single centralized management and policy control with clear mapping between business context and underlying network constructs.
  • The ability to make the network “follow the application” when services move across locations.
  • Elimination of silos between different Ops teams.
  • “Built-in” zero-trust security architecture owing to the ability to operate and connect at an individual microservice level, drastically reducing the attack surface.
  • Simplification of networks owing to the clear separation of application-level connectivity and security policies at the overlay, resulting in a highly simplified underlay.

 


Here at CourseMonster, we know how hard it may be to find the right time and funds for training. We provide effective training programs that enable you to select the training option that best meets the demands of your company.

For more information, please get in touch with one of our course advisers today or contact us at training@coursemonster.com

Posted in IBM | Tagged IBM

