
OUR BLOG


Category: VMware

Top Performing CIOs’ Key Priorities

Posted on September 29, 2022 by Marbenz Antonio


CIO responsibilities have been reshaped over the past ten years by software-fueled changes in the competitive landscape, global disruptions, and consumer demands for safe, always-on digital experiences. Executive-level vision is more important than ever as the pace of innovation rises, application delivery grows more complex, and attack surfaces shift. Modern CIOs are in charge of making these important decisions in a world where a single action, good or bad, can have an outsized impact. As McKinsey Digital put it frankly: “There’s no worse moment to be an average CIO.”

From IT Leader to Growth Driver: A New Breed of CIO

The battle to create a successful multi-cloud strategy and entice top talent is fierce, the window for succeeding in the application economy is narrow, and top CIOs aren’t waiting around. Instead, they are approaching tech-driven growth with the knowledge that customer experience (CX) and developer experience (DevEx) are inextricably linked, and that you cannot have one without the other. As a result, a new generation of CIOs is laser-focused on developing the teams, resources, and procedures that push the envelope.

Reaching for the Clouds

The advantage of multi-cloud is clear as innovation picks up speed. No other arrangement enables businesses to scale effectively, reduce reliance on any one vendor, and increase organizational resilience while using the distinct attributes of many cloud providers. However, contemporary CIOs are aware of the challenges associated with managing multi-cloud settings, and they welcome platforms that not only provide real-time health insight across multiple clouds but also watch for and alert on security anomalies in those environments. These solutions also enable shared visibility for security, operations, and development teams, encouraging collaboration, an important sign of DevEx-focused environments.

Innovation on the fly

Since CIOs have firsthand experience with how disruptive global crises can be, it is more important than ever to be prepared. At the moment, the globe is experiencing many crises on a variety of geopolitical and health-related fronts. Lessons learned from extreme lifts and shifts—some beneficial, some disastrous—serve as a barometer for how crucial it is to create a resilient company. Future-oriented CIOs aren’t concerned with getting teams back to the office in a world that has been irrevocably altered by the forced migration to remote work. They are putting even more focus on portable, secure IT environments that enable work from anywhere, and they are working with recruitment counterparts to make the most of this feature to bring in and keep top talent for all levels of their technological ecosystem.

 


Here at CourseMonster, we know how hard it may be to find the right time and funds for training. We provide effective training programs that enable you to select the training option that best meets the demands of your company.

For more information, please get in touch with one of our course advisers today or contact us at training@coursemonster.com


Methods for Picking a Cloud Service Provider for VMware Workloads

Posted on September 12, 2022 (updated September 15, 2022) by Marbenz Antonio


Deciding to move to VMware-on-Cloud from an on-premises or cloud-native environment is a significant step.

You want to take advantage of the flexibility and cost benefits of cloud services, but what should you do once you have made the decision?

Understanding your present environment and future business goals is essential for a successful transition to the cloud. Analyzing and modeling your current environment will help you partner with a first-class cloud provider that meets your needs and lay out a successful cloud adoption strategy.

5 major things to consider

Choosing where to host your VMware-on-Cloud workload is an important step in the cloud adoption process. The cloud provider must meet your requirements while also delivering a substantial reduction in total cost of ownership (TCO).

Here are some important things to think about:

  1. Level of control: Businesses look to the cloud for security, compliance, and privacy. You can only manage your data privacy, security, and compliance to the extent that the cloud environment allows you to. Do you desire the VMware environment’s unconstrained flexibility in order to reduce CapEx while utilizing your existing operational strength?
  2. Regional availability: Knowing your scalability and growth ambitions is an important factor in selecting a cloud provider. Are you looking for a local regional concentration or a global scalable presence?
  3. Maximum cluster size: You can control uptime, greater availability, and serviceability with the help of the proper cluster size. This element is directly related to the quantity, size, and chattiness of the tasks. Do you have a lot on your plate? Do you have numerous interconnected groupings of workloads? The ideal cluster size will be determined by your workloads. This will enable you to select the cloud that offers you the ideal-sized cluster.
  4. Network traffic: In a hybrid cloud system, the required network bandwidth is directly proportional to the anticipated network traffic, and the price of transporting data depends on that bandwidth. Who is paying for these expenses? How much do ongoing network costs add to your TCO? This is an important consideration when choosing a cloud service (see the worked example after this list).
  5. The flexibility of host models: The required number of hosts determines your ongoing expenses in a cloud system. It is important to understand whether the cloud provider offers you a wide range of models, flexible host sizes, and fine-grained host availability. This will lower your TCO and have a direct influence on your ongoing expenses.
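
To make the network-cost point concrete, here is a worked example with purely illustrative numbers (actual egress rates vary by provider and region): if your hybrid workloads move 20 TB of data out of the cloud each month and the provider charges an assumed $0.09 per GB of egress, that is roughly 20,480 GB × $0.09 ≈ $1,843 per month, or about $22,000 per year added to your TCO before any compute or storage costs.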

Comparing the top cloud platforms

There are many different VMware-on-Cloud setups available from various cloud providers. To differentiate themselves from the competition, each cloud provider provides a variety of features and advantages.

Selecting the most cost-effective cloud platform requires an understanding of these aspects and how they relate to the specifics of your workload. You can quickly compare VMware-on-Cloud to all the top cloud platforms to determine which is the best option for your business.

Right-sizing your cloud environment through modeling

To visualize cost savings in the cloud, such as by correctly sizing your cloud infrastructure, detailed pre-migration cloud modeling is required. This can help you avoid wasting 30–60% of your cloud budget on over-provisioned virtual machines. It is time-consuming, laborious, and in some cases impossible to manually model your current on-premises system and prospective cloud environment.

A decent modeling SaaS solution allows you to run scenario analysis to identify the most cost-effective cloud platform to migrate to and gives an inventory of your on-premises infrastructure. Additionally, it makes it simple and quick to discover any hidden expenses associated with each cloud platform you are considering.

After studying your current workload environment, Akasia provides a CapEx and OpEx TCO analysis based on infrastructure, utilization, and workload consumption. It offers right-size recommendations for various cloud providers, tailored to your particular usage.

Get a free evaluation of VMware application migration

The most important steps for a successful cloud adoption strategy that enables the seamless migration of your VMware workloads are cloud analysis, assessment, and modeling. You may plan the most cost-effective lift-and-shift workload migration with the aid of assessment and modeling, which will remove the uncertainty from cloud migrations.

The cloud service provider is a long-term partner. Your business’s modernization plan will be determined by your decision.

 



What’s Next for VMware Edge Compute: Let’s Go Over the Edge

Posted on September 5, 2022 by Marbenz Antonio


With a sustained focus on and investment in edge computing, VMware is advancing into the future. The focus of this quickly expanding field is processing and managing data and applications close to where they are consumed and generated.

The Edge is home to a sizable and expanding number of devices. The need for comprehensive and consistent Edge computing management outside of centralized data centers and clouds is on the rise as business drivers such as data transmission costs, lower latency response requirements, privacy laws, data governance, and 5G/6G rollouts continue to expand.

Long before VMworld 2021’s Vision and Innovation Keynote and Edge Breakouts, xLabs, a specialist team housed in VMware’s Office of the CTO, invested in and accelerated Edge technologies. We are eager to unveil more ground-breaking technology at VMware Explore 2022 to support our customers as they expand and manage their Edge workloads:

  • Assisting customers in overcoming the limitations of edge computing to satisfy their objectives for autonomy while utilizing the tried-and-true ESXi workhorse (Project Keswick)
  • Modernizing the world’s power grids with virtualization
  • Machine learning at the edge with Kubernetes (Project Yellowstone)
  • Balancing workloads at the Edge (Work Balancer for Tanzu Kubernetes Grid Edge workloads)

Project Keswick

At scale, even simple things frequently create complexity. The same applies to deployments. As the number of devices in an edge deployment increases to the hundreds of thousands or millions, governance and visibility issues increase with it. What software versions, for instance, are these Edge devices running? When new versions are released, how do we update these programs? Due to their remote locations, these devices are frequently challenging to access, and updating each one individually takes time. What is needed is a way to provide uniform definitions for how infrastructure and workloads are run.

Project Keswick combines GitOps and ESXi, a dependable hypervisor. Version control and CI/CD are made available to us by GitOps, allowing us to document, manage, and provision infrastructure. We specify how the infrastructure should be configured and the workloads it should support as part of this flow using a YAML file.

Here is an example YAML file that defines a straightforward deployment. We would like to deploy 3 pods (replicas) to run the Nginx app using the most recent container image (hello-world: latest):
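
A minimal sketch of such a manifest, matching that description, might look like the following; the metadata name and labels are illustrative assumptions:

    # Kubernetes Deployment manifest (sketch): 3 replicas of the
    # hello-world:latest container image for an app labeled "nginx"
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-hello-world        # illustrative name
    spec:
      replicas: 3                    # the 3 pods described above
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: hello-world:latest   # "most recent container image"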

Once this file is committed to source control (git), each ESXi device linked to the repository can pick up the configuration update. Endpoints pull changes down and apply them when they are ready to update.

The levels of Project Keswick are described in this architecture diagram. The Infrastructure and Workloads layers are specified through Kubernetes manifests, which are YAML files kept in a git repository. Project Keswick uses a listener on the git repository to track down manifest changes, and when the node is prepared to update, it executes the new instructions.

The use cases for Project Keswick were designed with intermittent connectivity in mind. Nodes won’t always be prepared or in the ideal location to upgrade, but when they are, they can download the most recent versions and perform an update themselves.

Virtualization of Workload in Power Substations

The necessity for innovation in the energy sector is highlighted by recent problems with utility grids and power outages. Power substations in particular (the points between power plants and the homes or businesses they serve) are usually remote, unconnected, and unequipped to handle the rising demand for a bidirectional flow of power. When voltage transmission substations were first constructed, energy flowed in only one direction: from the power plant, through the substation, to residences and commercial buildings. In this model, power substations step voltage down to the proper levels for its destination. Today, as more renewable energy is produced at homes and businesses using options like solar panels, there is an increasing requirement for the grid to handle a bidirectional flow of electricity back to the grid.

To upgrade our power grids, VMware is making investments with Intel and Dell as our co-innovation partners.

The Power Substation Workload Virtualization project by VMware allows for the flow of energy in both directions. In the process, it gets the most out of the physical hardware assets that run the mission-critical software managing the flow of energy. Because it monitors fast-moving electrical signals, this Virtual Protection Relay (VPR) software has highly stringent networking requirements. Furthermore, testing the VPR software itself requires special equipment to deliver simulation data.

The testing environment we created to enable electrical substations to perform their VPR software resiliency checks is depicted in the architectural diagram below. To support real-time operating systems, the precision time protocol, and the parallel redundancy protocol for networking resilience, the Power Grid Virtualization project modified ESXi 7.0.3 at its core. With the help of this upgrade, we were able to support the VPR software and testing apparatus:

The Doble Power System Simulator is where the workflow in this diagram begins, at the bottom. The VPR Services and Simulation Management & Troubleshooting host, which runs on servers with ESXi 7.0.3, receives simulated amp readings from this device across the physical and virtual network switches. It is important that the VPR software can be tested extensively to ensure it identifies variations in amperage, since such variations can seriously damage power substation equipment.

This experiment demonstrated that ESXi is capable of supporting parallel redundancy protocol, precision time protocol, and real-time operating systems. So that electricity substations are better prepared for future transformation, we are eager to keep offering our solution for running and testing VPR software with partners and customers.

Project Yellowstone

Now let’s talk about machine learning (ML) at the edge, which is dramatically changing global infrastructure and business. For instance, ML at the Edge is having a huge impact on the automotive industry. Driverless vehicles, electric vehicles, and other innovative models are fitted with thousands of sensors that collect large volumes of data for ML algorithms. The sensors gather data and create a “picture” of the car’s surroundings, including pedestrians, the state of the road, traffic signals, and even the driver’s eye movement. As a result, there is a lot of data that needs to be processed quickly at the Edge due to volume, latency, and data protection restrictions.

IT administrators are experiencing a steep learning curve as firms use artificial intelligence (AI) and machine learning (ML) to automate more of their tasks. Because of the wide diversity of accelerators and infrastructure that workloads run on, there is no guarantee that an ML inference task will land on the correct node, which can result in failed or inefficient workloads that need time-consuming troubleshooting.

With Project Yellowstone, we developed heterogeneous Edge AI acceleration for Kubernetes-based deployments. Recognizing and comprehending workloads at the Edge delivers the speed, agility, and security needed to enable applications built with AI and ML.

To improve AI and ML tasks, Project Yellowstone makes use of cloud-native ideas. Using several popular graph compilers, such as Apache TVM and Intel OpenVINO, Project Yellowstone will allow users to:

  • Set up workload clusters with the appropriate accelerators on the correct node
  • Dynamically auto-compile and tune ML inference models
  • Utilize the accelerators that would be most effective for the workload

Tanzu Kubernetes Grid Edge Workloads Work Balancer

Additionally, we are investing in methods for balancing workloads at the edge. The advantages of using ML and other time-sensitive procedures can only be fully realized when load balancing at the Edge is properly orchestrated.

Customers usually deploy Kubernetes on bare-metal devices rather than in virtual machines at the Edge due to resource constraints and non-isolation requirements. There are a couple of issues with this strategy, though:

  • For bare-metal clusters, Kubernetes does not provide a network load balancing mechanism.
  • The only load balancing options in Kubernetes are the NodePort or external IPs services.

Neither of these options is ideal. NodePort’s port range is constrained and requires users to have direct access to the node, neither of which is secure. To use external IPs, users must explicitly assign an IP to each node, which is unreliable and would require manual intervention if a node failed. Due to a shortage of IT personnel or sporadic network connectivity, manual repair is frequently impractical at the Edge.

With the help of this project, users can build a load balancer service just like they would in the cloud, using software-defined cloud management and an Edge workload balancer for Kubernetes clusters deployed on bare metal.
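
As a sketch of what this looks like from the user’s side: with such a workload balancer in place, a bare-metal Edge cluster could accept an ordinary Kubernetes Service of type LoadBalancer, just as a cloud-hosted cluster would (the name and ports below are illustrative assumptions):

    # Kubernetes Service manifest (sketch): requests a load-balanced
    # address for the pods labeled app: edge-app
    apiVersion: v1
    kind: Service
    metadata:
      name: edge-app-lb              # illustrative name
    spec:
      type: LoadBalancer             # fulfilled by the Edge workload balancer
      selector:
        app: edge-app
      ports:
        - port: 80                   # port exposed on the balanced address
          targetPort: 8080           # container port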

Performance and execution are enhanced by developing a workload management capability at the Edge. Having the right tools, like load balancing, to maximize our resources becomes increasingly important when we apply techniques like ML to solving issues in compute-constrained environments at the Edge.

 



Acquiring New Skills to Operate in the Multi-cloud Universe

Posted on September 5, 2022 by Marbenz Antonio


This week, VMware Explore is in full swing. We want to highlight some of the important issues and discussions taking place on-site in case you missed them.

We’ve already heard a lot about the IT industry’s direction in terms of multi-cloud, contemporary applications, and hybrid work. But how can IT managers support their employees’ development to meet the demands of this modern era? What areas should they focus on upskilling and training in, and how could they best introduce a new technical process to their team?

Dimitris Karabinis from DXC Technology explored these issues and contrasted best practices and methods for enhancing current team skill sets at our collaborative Explore session on upskilling.

We share a condensed version of the Explore session in the Research and Insights Brief below. Find out more about how specific team requirements from across the business are driving the development of new multi-cloud specialties, and which skill sets are shaping the next generation of IT.

The top five skill sets that I&O Teams will prioritize in 2022 are:

  1. CI/CD for operations
  2. Backup, recovery, and resilience of protected data across platforms
  3. Managed cloud database services and optimized administration
  4. Application performance monitoring
  5. Cloud cost management and optimization

 



The Best Cloud to Use for Data Sovereignty

Posted on September 1, 2022 by Marbenz Antonio


The Sovereign Cloud business has an estimated TAM of $60B by 2025, in no small part due to the rapid rise of data privacy legislation (145 nations now have data privacy laws) and the difficulties of compliance in highly regulated industries, as recently highlighted at VMware Explore US.

VMware is delivering on our Sovereign Cloud position: Sovereign Security, Sovereign Compliance, Sovereign Control, Sovereign Autonomy, and Sovereign Innovation as the demand to monetize data grows and nations strive to capture the true value of data.

Let’s now examine how a company can better abide by data sovereignty regulations by selecting the appropriate cloud architecture.

Most companies now store at least some of their data in the cloud. The flexibility, scale, and computing power offered by the cloud are better than those of conventional on-premises data centers. Public clouds are well known for their large storage capacities and low costs; however, some businesses have begun repatriating their data to comply with regulations. In regulated industries, 81% of decision-makers have returned all or part of their data and workloads from public clouds. While some have relocated their data back on-site, others use a combination of public and private clouds. In the end, securing national data and realizing its value has never been a more important component of cloud architecture. Choosing the best data sovereignty solution has become a hot topic due to growing national legislation, including the US CLOUD Act, the EU’s GDPR, China’s Personal Information Protection Law, and data privacy laws in 132 countries, a number rising by roughly 10% annually.

Let’s examine the most common kinds of cloud architectures to better comprehend why a firm could select one cloud model over another:

  • Public – Infrastructure and services for on-demand computing that are shared by several businesses on the open Internet and are controlled by a third-party provider. Generally speaking, public clouds are multi-tenant, which means that numerous users share a single server that has been partitioned to prevent unwanted access. Public clouds provide huge scale for little money.
  • Private – Infrastructure reserved for a single organization. Private clouds can be hosted by private cloud providers, third-party facilities, or an organization’s own data center. Due to restricted access, private clouds are typically more secure than public ones and can satisfy legal requirements for data protection and sovereignty. But to set them up and keep them running, more resources are needed.
  • Community – shared cloud that connects numerous businesses or employees for collaboration. This might be a number of private clouds linked together to enable data interchange. These are widely employed by regulated businesses where public clouds are not acceptable, but because numerous parties are involved, they are challenging to set up.
  • Government – a specific private or public cloud type created for governmental organizations to uphold their control and sovereignty.
  • Multi-cloud – utilizing many public clouds to benefit from various features. Some services may be hosted by an organization in one cloud and others by a different provider. The volume of data and access make this approach the one with the greatest security risk.
  • Hybrid – a combination of public and private clouds. The term is occasionally also used to refer to a mix of public cloud and on-premises private data centers.

Public clouds are acceptable for public data that is exempt from data sovereignty regulations, but total compliance requires a hybrid or other more private solution. Private clouds can satisfy the needs of data sovereignty, but they require specialized data centers, either run by the company itself or by a provider using specialized technology, which can take a lot of money and time. The simplest or off-the-shelf option may not include the level of security or compliance required to be sovereign. Jurisdictional control, local monitoring, data portability, and customizability, to name a few, are important considerations.

A solution created expressly to satisfy data sovereignty needs is a sovereign cloud. Consider this a hybrid cloud that includes some of the best aspects of both public and private clouds. Smaller, local, multi-tenant cloud service providers with extensive experience run them. A sovereign cloud offers private cloud advantages for data sovereignty without the associated IT hassles.

In hybrid cloud architecture, a sovereign cloud can be employed in addition to the public cloud. While non-sensitive data and services might reside in the public cloud, data and services subject to data sovereignty regulations would reside in the sovereign cloud. To maintain compliance, the data exchange across various clouds needs to be carefully managed.

Finding a sovereign cloud provider that is customizable, flexible, and easy to use is essential. You need to be able to audit operations and access to make sure compliance is maintained. Local, self-attested sovereign cloud providers can satisfy data residency and sovereignty needs by effectively implementing and enforcing residency requirements. Understanding cross-border limitations and jurisdictional control is also necessary to handle privacy issues without involving remote data processing. At the end of the day, true sovereignty ensures that other jurisdictions are unable to assert authority over data stored beyond their national borders, fostering national data interests and growth.

Compared to a regular public cloud, true sovereign clouds demand a higher standard of data and metadata security and risk management. Along with the data itself, metadata (information about the data, such as IP addresses or hostnames) must be protected. Providers of VMware Sovereign Clouds offer transparency regarding security precautions, including cyber defenses and physical security in the data center.

Providers of VMware Sovereign Cloud:

  • provide best-in-class IaaS security and compliance with trusted, approved partners
  • are specialists in both local platform development and local data protection laws
  • offer flexible, configurable, cost-effective (TCO) solutions for data choice and control
  • adapt to changing customer needs and provide a full, future-proof solution

Customers seeking sovereign solutions look to VMware Sovereign Cloud providers for their knowledge and openness, which ensure security and compliance with regional data privacy and sovereignty requirements. This knowledge and transparency make data security and compliance possible and quickly prove invaluable.

 



Why does every organization require Multi-factor Authentication?

Posted on July 5, 2022 (updated July 26, 2022) by Marbenz Antonio


Mobile single-sign-on and Workspace ONE UEM device compliance are critical approaches for securing devices that access corporate applications. However, in the absence of user-facing authentication, these applications are vulnerable to simple attacks on compliant devices.

The daily number of cybercrimes reported to the FBI has tripled since the coronavirus outbreak began in 2020. Attackers increasingly target users directly for their access to sensitive information, capitalizing on the fact that more people are vulnerable while working from home or in public settings on less secure networks. Attackers are also growing more effective at phishing, sending out bogus emails from apparently reliable sources.

Can you recognize that this is a phishing email, for example?

A cautious user or skilled employee may notice that hovering over the “change password” button reveals a suspicious URL; otherwise, anyone could easily fall victim to a phishing attack. This is why all businesses require multi-factor authentication: to improve resistance to phishing while also limiting end-user access to work apps.

Many of our customers utilize device management with certificate authentication to give users seamless, authorized access from compliant devices. What device management does not do is provide the required assurance that the intended user is also the current user of the device.

While physical breaches from mishandling a corporate device may appear unlikely, keep in mind that it only takes one incident to cause significant economic disruption. Physical security breaches were the source of some of history’s most damaging cyberattacks. In 2008, it was a random USB stick picked up from the parking lot of a US military installation and loaded into a Department of Defense computer that disseminated malicious code throughout the US military’s networks. Consider the following scenario: an employee temporarily leaves their business laptop unlocked in an open coworking environment. What security layers does your business have in place to ensure that only the device owner has access to applications holding highly private company data?

Workspace ONE Access’s goal with mobile single sign-on is to make access feel magically simple, but we often require another factor of identity (something you have, something you know, or something you are) to assure a higher level of authentication.

Implementing multi-factor authentication does not have to be difficult or detrimental to the user experience! Workspace ONE Access includes a set of built-in multi-factor authentication techniques that can be configured and utilized for a variety of customer use cases to make installing multi-factor authentication straightforward. By providing solutions that are compatible with managed devices and personal devices, Workspace ONE Access is bringing multi-factor authentication capabilities to all users on any device.

 



A Dynamic Approach to Zero Trust Security for Government Agencies

Posted on July 1, 2022 (updated July 26, 2022) by Marbenz Antonio


There has been a lot written on Zero Trust: what it is and isn’t, and why it’s so difficult to simply “switch on.” The majority of these pieces center on identity and authorization, and on the fact that implicit trust is no longer acceptable. All of this is true, but ultimately we need to focus on the dynamic nature of the typical organization and how Zero Trust addresses it with AI and ML combined with automation and orchestration.

Dynamic Organizations Require Dynamic Security

Government enterprises are more complex and dynamic than ever before. Users move regularly based on their role and environment, and work is no longer defined by a physical office but by wherever the worker happens to be. Devices come in a variety of shapes and sizes, and they move as much as employees do, using numerous wireless technologies and networks that the IT department cannot control or protect. Networks no longer have perimeters, and the process of delivering work to users has resulted in an ever-changing network (and thus threat) landscape. Workloads that used to sit in one place for years in highly secure data centers can and will shift from cloud to cloud to meet business needs.

Simply, an agency is never static, so why should security be?

Self-Aware and Self-Healing

The DHS Cybersecurity and Infrastructure Security Agency (CISA) announced its Zero Trust Maturity Model in the summer of 2021, giving government entities a rational approach for delivering Zero Trust, as well as maturity levels to help identify where agencies were on that journey. The foundation of the Zero Trust concept is made up of five pillars: identity, device, network, application/workload, and data, all of which are supported by visibility, analytics, automation, orchestration, and governance. The idea is that security capabilities in each pillar increase over time based on the maturity levels described in the document, eventually leading to a Zero Trust architecture.

The maturity level descriptions, which clearly define the underlying value of Zero Trust, are the cornerstone of the Zero Trust Maturity Model. To mature, the Zero Trust security model must be intelligent in comprehending the present landscape and dynamic in reacting to events affecting the environment’s security posture in real-time. To put it another way, the Zero Trust paradigm needs to be self-aware and self-healing. Consider the following examples:

  • As part of a Halloween prank, your website was hacked. With analytics, the website detects the change, recognizes that it is not normal, and returns to the original view in seconds – with no human intervention.
  • Due to an error in a routine upgrade, an internal application violates the security policy. The mismatch is identified, noted, and upgraded as part of your regular automated assessments to bring you back into compliance.
  • An IP security camera put in a remote building is delivering communication to several network devices for unclear reasons. This conduct is recognized as unusual, and the device is quarantined immediately, with an alert sent to the security operations center (SOC) for review.

Self-awareness and self-healing are simple concepts to grasp, but difficult to implement. Attaining this automation necessitates a variety of technologies and capabilities that are properly orchestrated and operate in real time (a hypothetical example follows the list below):

  • Infrastructure instrumentation and data from devices provide real-time visibility into what is going on.
  • Analytics across multiple tools to determine what is normal and good versus what is abnormal and should be addressed.
  • Artificial intelligence and policy engines to decide what should be done to resolve the situation in the most efficient and simple way possible.
  • Configuration and automation tools that operate on the systems and fix undesirable behavior in real-time while logging and alerting the human supervisor.
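
To make this concrete, here is a purely hypothetical, product-agnostic sketch of what a declarative policy-engine rule for the IP-camera scenario above might look like; the field names and syntax are illustrative assumptions, not any real product’s format:

    # Hypothetical self-healing policy rule (illustrative only)
    rule:
      name: quarantine-anomalous-ip-camera
      trigger:
        telemetry: network-flows          # real-time visibility feed
        condition: device.class == "ip-camera" and anomaly.score > 0.9
      actions:
        - quarantine: device              # automated remediation
        - notify: soc                     # alert the security operations center
        - log: audit-trail                # record for the human supervisor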


Building Tools for Zero Trust Maturity

The industry is centered on self-awareness and self-healing. Legacy security focused on addressing known dangers and preventing them from affecting systems. We are now focusing on zero-day threats, which are essentially unknown to systems and therefore have no known preventions.

Anomalies that may be indications of malware or malicious activity can be spotted using visibility tools and analytics. Working backward from the symptom, the underlying cause can be identified and remedied. Endpoint detection and response (EDR) technologies enable this on endpoints, whereas extended detection and response (XDR) products use telemetry from both endpoints and network or cloud systems to detect anomalies and drive a remediation process. These tools have embedded intelligence in the form of AI systems that help determine the response, which is then performed by a configuration and orchestration system that may or may not be embedded. These are just two examples of how the industry is attempting to create a self-assessing, self-healing architecture.

As you go toward a mature Zero Trust architecture, seek tools and technologies that can provide the telemetry, automation, and intelligence required to resolve problems in your systems that are not yet known.

 



Why Is Security So Difficult? (As Well as What State and Local Agencies Can Do)

Posted on July 1, 2022 (updated July 26, 2022) by Marbenz Antonio


When it comes to security, it all usually comes down to protecting your people, assets, and data.

Everything sounds so straightforward when you put it in those terms. However, when seen from the standpoint of state and local governments, things become more complicated.

A Wide Selection of Cybersecurity Products

For example, how many software products does it take to cover the three categories mentioned above (people, assets, and data)? Nearly half of state and local government IT leaders say they use separate products to protect their assets, which range from laptops to mobile devices to servers on-premises in the data center and in the cloud, on top of the plethora of products used to control who can access the systems, applications, and data that power government services to citizens. This is true even for smaller municipalities and county governments. The large number of security software packages is understandable given the various teams involved in enterprise security.

A Lack of Skilled Staff

Next, consider how many personnel in those state and municipal governments have as their primary and sole task controlling the organization’s security. Security management can range from responding to security problems to teaching people within the government to implementing and managing the numerous software solutions that handle security. Many of the larger state governments may have as many as five to fifteen people with security duties. However, many of the smaller local municipal and county administrations may have two or three. Security specialists typically have several job possibilities, and there aren’t enough skilled experts to go around.

Budgets are tight, but prospective funding under the IIJA may provide some help

State and municipal government budgets are constrained, particularly in smaller cities and counties. There is never enough money to do all of the necessary tasks. Taken together, the sheer number of security products, a scarcity of well-qualified professionals, and insufficient security expenditures pose substantial hazards to the local government organization.

Enter the bipartisan Infrastructure Investment and Jobs Act (IIJA), which was signed into law by President Biden in November 2021. The IIJA’s State and Local Cybersecurity Grant Program provides $1 billion in grant funds to state, local, and tribal governments to address cybersecurity risks and threats to information systems. While the money will flow through the states, the majority of it must go to local governments, according to the rules. The Notice of Funding Opportunity (NOFO) has not yet been published, although it is scheduled to be released sometime in the summer of 2022. While this may appear to be a windfall for local governments and Tribes, keep in mind that the United States has roughly 90,000 local government units. If the funds were allocated equally among them, each would receive approximately $11,000, hardly much money to defend people, assets, and data from security threats.

How to Plan Your Cybersecurity Strategy for IIJA Funding

What can a state, city, or local government do to protect its organization from increasingly complex cyber-attacks and to prepare for the IIJA’s release of funds? A plan is always the first step. When you’re dealing with several software products, multiple teams, and limited employees, creating a plan may seem overwhelming. However, there are numerous businesses and government websites that can assist, or smaller cities and counties can look to the state for their plan. Though it may appear to be Security 101, having a cybersecurity strategy and roadmap is one of the top five most important projects across governments, cities, and counties.

A cybersecurity plan is also necessary to prepare for the IIJA’s release of funds so that you can apply for your fair share as soon as the NOFO is issued. When applying for the grant, the following factors must be considered:

  • Every funding application must contain a security plan.
  • Understand your existing cybersecurity posture in relation to nationally recognized cybersecurity frameworks such as the NIST Cybersecurity Framework. This approach does not recommend specific technology, instead focusing on outcomes.
  • Focus on areas where governments may strengthen important and critical services, such as emergency IT systems, electoral systems, water utilities, or anything else that could generate headlines if it was compromised.
  • Make use of a Zero Trust architecture and mature implementation.

It is not easy to protect your people, assets, and data, but it is the foundation of trust that all government services must deliver to people.

 



Is Monitoring the Dark Web Worth It?

Posted on July 1, 2022 (updated July 26, 2022) by Marbenz Antonio


The Dark Web is a section of the internet that allows its users to remain anonymous. Unfortunately, this anonymity also allows buyers and sellers to trade illicit content. Credit cards, medical records, personal information, user credentials, and far worse data sets are offered there. To maintain anonymity, cryptocurrency is commonly used as the payment mechanism.

Consider the Dark Web to be a retail center or mall. The ground floor is open to the public for browsing and shopping. The stores range from department stores to thrift shops and do not offer any specialized items or services. More specialized stores can be found on the mall’s upper floor, but you must be a member to enter. This affiliation is founded on trust, commerce, and credibility. Access is limited, but it is possible. Elite and premium stores are the next level up. These are not accessible by elevator or escalator, and entry is by invitation only.

The point is that the dark web is not easily accessible to the average internet user, and keeping an eye on what is going on is not a one-person job. This is why corporations hire specialized services to scan the dark web for critical information about them.

What are Dark Web Scanning Services?

The number of dark web monitoring services has increased tremendously in recent years, but are they worthwhile? What are the facts about these services, and why do some businesses believe they provide a false sense of security?

Many businesses now provide Dark Web monitoring services, but there is widespread misunderstanding about how they work or whether they work at all. The following are examples of dark web monitoring:

  • Scanning services – every day, hundreds of thousands of hidden websites are scanned for information on your company.
  • Trading services – where they will trade on your behalf to verify data, obtain insights, or buy your data. Banks, for example, may purchase credit cards issued by them in order to cancel them.
  • Collaborating services – discussing attack prospects, zero-day exploits, and intelligence sharing on forums

Now that you’ve learned about these services, do you think they are worthwhile? Here’s what you should know.

What Is the Quality of Dark Web Scanning Services?

Despite some providers’ excellent marketing campaigns, you should be aware that none of them can scan the entire Dark Web; that is an impossible task. In reality, they search the most widely available databases (with over 8 billion entries), which often contain outdated data accumulated from earlier breach data sets. They are less likely to scan forums that are accessible only to trusted individuals, and they are extremely unlikely to perform a deep web scan where peer-to-peer trading occurs. If you intend to use them, keep in mind that they will monitor and report on publicly available information, not delete it.

Usually, they will provide a portal through which to access useful administration tools for customizing scan criteria and reporting. These interfaces are useful, but they only provide insights into what they can see, not the entire Dark Web.

When something ends up on an underground site, the options for removing it are limited. Some dark web dealers may offer to remove it after you pay, but can you really trust them?

The reality is that they will not. After all, you’re dealing with shady people with corrupted values who are only interested in profit and don’t care about the harm they cause.

Do You Need Dark Web Trading Services?

This is invariably where the real action takes place. It takes people with the knowledge and existing networks to provide this service, and the number of people who can operate in this field is small compared to generalists.

Individuals with this expertise are typically reformed hackers, law enforcement, military, or intelligence officials. The challenge here, and it is one of values, is that if a person has earned the trust of the inner sanctum of the clandestine trading floors, they have done so through questionable activities. As a result, it is difficult to believe they have the necessary fundamental integrity. There have been examples of trade analysts acting as double agents, switching sides depending on who pays the most, and cases where they act as both buyer and seller in the same transaction.

5 Best Practices for Dark Web Monitoring Services

  1. Qualify hard: Ask a lot of questions to find out whether they are a tech company offering an intricate search feature or whether they have agents inside these communities. Ask how many search analysts they have, how many languages their team speaks, where the analysts are located, how they obtain insights, whether they have registered their services with police and intelligence teams, and whether they can show you examples of forums in which they participate.
  2. Test: Run a 90-day trial instead of a complete subscription to assess the integrity of their skills. If they are only conducting an automated search and not a manual, analyst-driven effort, you should consider other options.
  3. Legal protection: Make sure you have legal protection in place, including protections against improper behavior.
  4. Be smart: Never provide the agents with sensitive PII to search on, such as your birth date, bank account information, or secret keys. Accept the possibility that anything you share will be sold.
  5. Use Open-Source Intelligence (OSINT) tools first: Numerous OSINT tools can be used directly to deliver priceless insights. Begin with these, and then examine whether you have any intelligence gaps.

Is Dark Web Monitoring Worth It?

So, to answer the original question, is it worthwhile?

The answer is a qualified yes, provided you have realistic expectations, have examined the provider’s capabilities, and do not believe the hype around monitoring services. Dark web monitoring can be part of a cyber program, but it should sit low on the priority list. It is preferable to invest in preventative solutions rather than reactive ones like this. You are in a better position to begin if you have already established a Zero Trust security architecture, extended detection and response (XDR), multi-cloud protection, and DLP.

Consider the question that tends to follow the first dark web monitoring report submitted at a board meeting: “What do you want me to do about this?” Any reporting approach must be accompanied by action; otherwise, the report is just hype.

Before investing significant resources in this service, it is recommended that you focus on making your data set as dull to the dark web as possible, for example:

  • Delete non-essential data from your network – it is astounding how much needless data organizations keep, such as outdated resumes that were used once and are no longer required.
  • Challenge data sharing with third parties and ensure they have adequate security in place.
  • Encrypt sensitive data at rest and in transit – personally identifiable information, credit cards, medical information, and so on.
  • Keep credit cards off your network on PCI-compliant servers.
  • Have business systems in place to detect fraudulent behavior as fast as possible, such as offshore transactions on gift cards/credit cards, dormant cards, chargebacks, and so on.
  • Concentrate on effective user awareness education rather than just training. Sit down with your business teams, coach them on best practices, and build cyber champions in the organization.

 



Identifying Data Privacy, Residency, and Sovereignty in the Cloud

Posted on July 1, 2022 (updated July 26, 2022) by Marbenz Antonio


Identifying Digital Sovereignty

Let’s start with data sovereignty and data residency. These two terms are frequently blended and intermingled in a single statement, which can cause confusion right away.

If no jurisdiction is involved, then data residency is the correct phrase because it simply ensures that the data — and the processing of that data — is located in a certain geographical area. However, data sovereignty applies if the data is subject to exclusive legal protections inside a specific jurisdiction of a nation.

Data sovereignty is the ability to preserve legal control and authority over data within the jurisdictional boundaries of a given nation. This includes data flows and subsequent data processing within those jurisdictional boundaries, as well as any new data and metadata generated by processing the original data, which are subject to the same jurisdictional requirement.

While it may be natural to focus on the data — and the technology that creates it – the concept of data sovereignty encompasses data privacy, human rights, national identity, national security, a nation’s digital competence, the value of data, the data economy, and, ultimately, economic prosperity.

To contain the full scope of a legal entity, the jurisdictional boundary might extend across national borders. A prominent example is the European Union (EU), a grouping of political territories, with legal authorities inside the EU such as the European Commission (EC) and the Court of Justice of the EU (CJEU). Member nations such as Germany and France, along with their respective governments and judiciaries, have their own jurisdictions in addition to the EU bodies.

Data sovereignty differs from digital sovereignty. Instead, data sovereignty is a subset of the desire for digital sovereignty. Beyond data, digital sovereignty entails achieving digital autonomy across the complete end-to-end ecosystem and infrastructure, including hardware, software, identities, access, data processing capabilities, data security, and infrastructure cyber resilience.

With this in mind, a sovereign cloud facilitates the goal of digital sovereignty but does not provide data sovereignty or digital sovereignty on its own.

Rather, a sovereign cloud enables an organization to secure data sovereignty on the platform without compromising any of the economic benefits of cloud-at-scale, such as the flexibility, agility, and visibility that enterprises have come to expect from a modern cloud environment.

The landscape of digital and data sovereignty is complex

When we evaluate the present global arena surrounding data privacy and the larger digital and data sovereignty considerations, it is clear that this is a sector that, while fast evolving, still has much interpretation and evolution ahead of it. There have been, and continue to be, significant data privacy developments globally, between and within nations and regions, along with a growing awareness of the significance of stronger sovereign protections in better safeguarding mission-critical and sensitive private and public organization data, as well as the data of the citizens and customers those organizations hold.

Similarly, there is global acknowledgement of the major challenges in the continuing conversations and deliberations in this domain, but also recognition of the success that can be had in overcoming them. This success will lead not only to more certainty regarding individual, citizen, public, and corporate data privacy, but also to considerable social and economic advantages. These advantages will result not only from securing these important sovereign data assets, but also from guaranteeing that the core data remains accessible for sovereign study and analysis, along with the great value of the data sets that evolve from it.

At first glance, digital sovereignty appears to be a straightforward discussion, but hopefully this has demonstrated that it is a topic with deep impact across a very broad ecosystem of highly interconnected and, at times, contentious and competing matters, and one worthy of attention and discourse.

Using a sovereign cloud to take control of your digital destiny

The concept of a sovereign cloud is not new, and hopefully there is little misunderstanding about its important role in an organization’s quest to achieve digital sovereignty. Even with several ambiguities in the legal and compliance realms, and continuing uncertainty in the global, national, and regional data privacy landscapes, the importance of a sovereign cloud as part of the road toward digital sovereignty is greater than ever.

A sovereign cloud should prioritize one key component: better infrastructure control, allowing both public and private organizations to ensure they are following and implementing the necessary data privacy, security, and compliance measures to protect sensitive and regulated data and application workloads. As previously stated, this infrastructure management goes beyond data, applications, and systems to include controls for data in transit, data workflows, data processing capabilities (such as artificial intelligence and machine learning algorithms), and data access.

 


