
OUR BLOG


Author: Marbenz Antonio

NVIDIA GPUs are powering the next generation of trustworthy AI in a secure cloud

Posted on May 23, 2022 by Marbenz Antonio


By democratizing access to scalable computation, storage, and networking infrastructure and services, cloud computing is enabling a new era of data and AI. Organizations can now collect data on an unprecedented scale and utilize it to train complicated models and produce insights thanks to the cloud.

While the increased need for data has opened up new opportunities, it has also raised privacy and security issues, particularly in regulated areas such as government, banking, and healthcare. Patient records, which are used to train models to assist physicians in diagnosis, are one area where data privacy is critical. Another example is in banking, where models used to assess borrower creditworthiness are being developed using more comprehensive information such as bank records, tax filings, and even social media profiles. To guarantee that this data remains private, governments and regulatory organizations are enacting strict privacy rules and regulations to control the use and sharing of data for AI, such as the General Data Protection Regulation (GDPR) and the planned EU AI Act.

Commitment to a confidential cloud

Microsoft acknowledges that trustworthy AI necessitates a trustworthy cloud—one with built-in security, privacy, and transparency. Confidential computing is a critical component of this vision—a collection of hardware and software capabilities that give data owners technical and provable control over how their data is shared and used. Confidential computing is based on trusted execution environments (TEEs), a novel hardware abstraction. Data in TEEs is encrypted not just at rest or in transit, but also while in use. TEEs also provide remote attestation, which allows data owners to remotely validate the configuration of the TEE’s hardware and software and authorize specified algorithms to access their data.

Microsoft is dedicated to creating a confidential cloud, where confidential computing is the default for all cloud services. Azure already provides a rich confidential computing platform that includes various types of confidential computing hardware (Intel SGX, AMD SEV-SNP), core confidential computing services like Azure Attestation and Azure Key Vault managed HSM, and application-level services like Azure SQL Always Encrypted, Azure confidential ledger, and confidential containers on Azure. However, these options have been restricted to CPUs. This presents a problem for AI workloads, which rely significantly on AI accelerators such as GPUs to deliver the speed required to handle enormous volumes of data and train complicated models.

The Microsoft Research Confidential Computing group identified this problem and proposed a vision for confidential AI powered by confidential GPUs in two papers, “Oblivious Multi-Party Machine Learning on Trusted Processors” and “Graviton: Trusted Execution Environments on GPUs,” which we share in this post. We also go through the NVIDIA GPU technology that is helping us realize this goal, as well as the partnership between NVIDIA, Microsoft Research, and Azure that allowed NVIDIA GPUs to become part of the Azure confidential computing ecosystem.

Vision for confidential GPUs

TEEs can now be created using CPUs from companies like Intel and AMD, which can isolate a process or an entire guest virtual machine (VM), essentially removing the host operating system and the hypervisor from the trust boundary. The goal is to extend this trust boundary to GPUs, allowing CPU TEE programs to safely offload computation and data to GPUs.

Diagram showing the trust boundary extended from the host trusted execution environment of the CPU to the trusted execution environment of the GPU through a secure channel.

Unfortunately, expanding the trust boundary is not simple. On the one hand, we must protect against a variety of attacks, such as man-in-the-middle attacks, in which the attacker can observe or tamper with traffic on the PCIe bus or on an NVIDIA NVLink connecting multiple GPUs, and impersonation attacks, in which the host assigns the guest VM an incorrectly configured GPU, a GPU running older or malicious firmware, or one lacking confidential computing support. At the same time, we must ensure that the Azure host operating system retains sufficient control over the GPU to carry out administrative operations. Furthermore, the additional security must not impose major performance overheads, raise thermal design power, or require significant modifications to the GPU microarchitecture.

According to the research, this vision may be accomplished by equipping the GPU with the following capabilities:

  • A new mode in which all sensitive GPU state, including GPU memory, is isolated from the host.
  • A hardware root-of-trust on the GPU chip that can provide verifiable attestations capturing the GPU’s entire security-sensitive state, including all firmware and microcode.
  • GPU driver extensions for verifying GPU attestations, establishing a secure communication channel with the GPU, and transparently encrypting all communications between the CPU and GPU.
  • Hardware support for transparently encrypting all GPU-to-GPU communications over NVLink.
  • Support for securely attaching GPUs to a CPU TEE in the guest operating system and hypervisor, even if the contents of the CPU TEE are encrypted.

Confidential computing with NVIDIA A100 Tensor Core GPUs

With a new technology called Ampere Protected Memory (APM) in the NVIDIA A100 Tensor Core GPUs, NVIDIA and Azure have taken a critical step toward fulfilling this ambition. In this section, we detail how APM enables confidential computing within the A100 GPU to ensure end-to-end data privacy.

APM introduces a new confidential mode of execution in the A100 GPU, which marks a region in high-bandwidth memory (HBM) as protected and helps prevent leaks through memory-mapped I/O (MMIO) access into this region from the host and peer GPUs. Only authenticated and encrypted transmission to and from the area is authorized.

In confidential mode, the GPU can be paired with any external entity, such as a TEE on the host CPU. To enable this pairing, the GPU features a hardware root-of-trust (HRoT). NVIDIA provisions the HRoT with a unique identity and a corresponding certificate during manufacturing. The HRoT additionally supports authenticated and measured boot by measuring the GPU’s firmware as well as the firmware of other microcontrollers on the GPU, including a security microcontroller known as SEC2. In turn, SEC2 can issue attestation reports that incorporate these measurements and are signed by a fresh attestation key, which is endorsed by the unique device key. Any external party can use these reports to confirm that the GPU is in confidential mode and running the latest known-good firmware.

The NVIDIA GPU driver in the CPU TEE checks if the GPU is in confidential mode when it boots. If this is the case, the driver requests an attestation report and verifies that the GPU is a genuine NVIDIA GPU with known good firmware. Once authenticated, the driver opens a secure channel with the SEC2 microcontroller on the GPU, employing the SPDM-backed Diffie-Hellman-based key exchange protocol to generate a new session key. When that exchange is finished, the GPU driver and SEC2 both have the same symmetric session key.

The GPU driver encrypts all subsequent data transfers to and from the GPU using the shared session key. Because CPU TEE pages are encrypted in memory and therefore unreadable by GPU DMA engines, the GPU driver creates pages outside the CPU TEE and writes encrypted data to those pages. The SEC2 microcontroller on the GPU is in charge of decrypting the encrypted data sent from the CPU and transferring it to the protected zone. Once the data is in cleartext in high bandwidth memory (HBM), GPU kernels can freely use it for computation.

Diagram showing how the GPU driver on the host CPU and the SEC2 microcontroller on the NVIDIA Ampere GPU work together to achieve end-to-end encryption of data transfers.
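
The sequence in the preceding paragraphs (attestation check, SPDM-backed key exchange, encrypted staging buffers) can be pictured with a short sketch. The following Python is purely illustrative: verify_gpu_attestation, spdm_key_exchange, and copy_to_staging_buffer are hypothetical placeholders for work done by the NVIDIA driver and the SEC2 microcontroller, not real APIs; only the AES-GCM usage comes from the standard cryptography package.

```python
# Illustrative sketch only: the driver- and SEC2-level calls below are
# hypothetical placeholders for the flow described above, not a real
# NVIDIA or Azure API.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def verify_gpu_attestation(report: bytes) -> bool:
    """Stand-in: the real driver checks that the report is signed by a key
    chaining to the GPU's unique device certificate and that the measured
    firmware is on the known-good list."""
    return bool(report)  # placeholder decision


def spdm_key_exchange() -> bytes:
    """Stand-in for the SPDM-backed Diffie-Hellman exchange with SEC2; both
    sides end up holding the same 256-bit symmetric session key."""
    return os.urandom(32)


def copy_to_staging_buffer(nonce: bytes, ciphertext: bytes) -> None:
    """Stand-in for the DMA copy into a bounce buffer outside the CPU TEE;
    SEC2 decrypts it into the protected HBM region on the GPU side."""
    pass


def offload_to_confidential_gpu(data: bytes, attestation_report: bytes) -> None:
    if not verify_gpu_attestation(attestation_report):
        raise RuntimeError("GPU is not in confidential mode or firmware unknown")
    session_key = spdm_key_exchange()
    aead = AESGCM(session_key)             # authenticated encryption per transfer
    nonce = os.urandom(12)
    ciphertext = aead.encrypt(nonce, data, None)
    copy_to_staging_buffer(nonce, ciphertext)


offload_to_confidential_gpu(b"training batch", attestation_report=b"signed-report")
```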

Accelerating innovation with confidential AI

The introduction of APM is a significant step toward enabling greater use of confidential AI in the cloud and beyond. APM is the fundamental building block of Azure Confidential GPU VMs, which are currently in private preview. These VMs, developed jointly by NVIDIA, Azure, and Microsoft Research, have up to four A100 GPUs with 80 GB of HBM and APM technology, allowing customers to run AI workloads on Azure with increased security.

However, this is only the beginning. We are excited to take our partnership with NVIDIA to the next level with NVIDIA’s Hopper architecture, which will let customers ensure the confidentiality and integrity of data and AI models in use. We believe that confidential GPUs can enable a secure AI platform on which many businesses collaborate to train and deploy AI models by pooling sensitive information while maintaining complete control over their data and models. Such a platform can unlock the value of enormous volumes of data while protecting data privacy, allowing enterprises to drive innovation.

Bosch Research, the company’s research and advanced engineering arm, is building an AI pipeline to train models for autonomous driving. Personally identifiable information (PII), such as license plate numbers and people’s faces, is common in most of the data it uses. At the same time, it must comply with GDPR, which requires a legal basis for processing PII, namely data subjects’ consent or legitimate interest. The former is difficult because it is nearly impossible to obtain consent from pedestrians and drivers captured by test cars. Relying on legitimate interest is also difficult because it involves demonstrating, among other things, that there is no less invasive way of achieving the same purpose. This is where confidential AI shines: using confidential computing can help reduce risks for data subjects and data controllers by restricting data exposure (for example, to specified algorithms), allowing businesses to train more accurate models.

Microsoft Research is dedicated to working with the confidential computing ecosystem, including collaborators such as NVIDIA and Bosch Research, to increase security, enable smooth training and deployment of confidential AI models, and power the next generation of technology.

Posted in Microsoft | Tagged: Microsoft

Azure AI can help businesses to offer customers in over 100 languages

Posted on May 23, 2022 by Marbenz Antonio


Microsoft announced today the addition of 12 new languages and dialects to Translator. With these upgrades, the service can now translate between more than 100 languages and dialects, making text and document information available to 5.66 billion people globally.

“One hundred languages is a good milestone for us to realize the goal of allowing everyone to interact regardless of language,” said Xuedong Huang, Microsoft technical fellow and Azure AI chief technology officer.

Today, Translator covers the most commonly spoken languages in the world, including English, Chinese, Hindi, Arabic, and Spanish. Advances in AI technology have enabled the company to expand its language library with low-resource and endangered languages, such as Inuktitut, a dialect of Inuktut spoken by around 40,000 Inuit in Canada.

Bashkir, Dhivehi, Georgian, Kyrgyz, Macedonian, Mongolian (Cyrillic), Mongolian (Traditional), Tatar, Tibetan, Turkmen, Uyghur, and Uzbek (Latin) are the new languages and dialects that have taken Translator beyond the 100-language mark.
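
As an illustration of how an application might call the service for one of these newly added languages, the sketch below sends a short string to the Translator REST API (v3.0) and requests Georgian (“ka”) and Macedonian (“mk”) output. The endpoint path and request shape follow the public Translator v3 documentation; the subscription key, region, and language choices are placeholders you would replace with your own.

```python
import requests  # third-party: pip install requests

# Placeholders: supply your own Translator resource key and region.
SUBSCRIPTION_KEY = "<your-translator-key>"
REGION = "<your-resource-region>"

ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"
params = {"api-version": "3.0", "to": ["ka", "mk"]}  # Georgian and Macedonian
headers = {
    "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
    "Ocp-Apim-Subscription-Region": REGION,
    "Content-Type": "application/json",
}
body = [{"Text": "Translation brings people together."}]

response = requests.post(ENDPOINT, params=params, headers=headers, json=body)
response.raise_for_status()
for translation in response.json()[0]["translations"]:
    print(translation["to"], "->", translation["text"])
```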

Removing language barrier

Thousands of organizations throughout the world have used Translator to connect with their members, workers, and clients. For example, the Volkswagen Group uses machine translation technology to serve its users in over 60 languages, translating more than 1 billion words each year. The firm began with standard Translator models and is now fine-tuning these models with industry-specific terminology using Translator’s customization functionality.

According to Huang, the ability for enterprises to fine-tune pre-trained AI models to their unique needs was central to Microsoft’s ambition when it released Azure Cognitive Services in 2015.

Azure Cognitive Services, in addition to language, contains AI models for speech, vision, and decision-making tasks. These models enable enterprises to use capabilities such as Optical Character Recognition (OCR), a Computer Vision technology. This service extracts text entered on a form in any of Translator’s more than 100 languages and uses it to populate a database.
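
A minimal sketch of the OCR step described here, using the Computer Vision Read API to pull text from a scanned form image before handing it to Translator. The endpoint, key, and image URL are placeholders, and the polling loop is simplified; consult the Read API documentation before using this in production.

```python
import time
import requests  # third-party: pip install requests

# Placeholders: supply your own Computer Vision endpoint, key, and image.
VISION_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
VISION_KEY = "<your-computer-vision-key>"
FORM_IMAGE_URL = "https://example.com/scanned-form.png"

headers = {"Ocp-Apim-Subscription-Key": VISION_KEY}

# Submit the image for analysis; the service responds with an operation URL.
submit = requests.post(
    f"{VISION_ENDPOINT}/vision/v3.2/read/analyze",
    headers=headers,
    json={"url": FORM_IMAGE_URL},
)
submit.raise_for_status()
operation_url = submit.headers["Operation-Location"]

# Poll until the asynchronous read operation finishes (simplified loop).
while True:
    result = requests.get(operation_url, headers=headers).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)

if result["status"] == "succeeded":
    for page in result["analyzeResult"]["readResults"]:
        for line in page["lines"]:
            print(line["text"])  # text that could now be sent to Translator
```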

“We celebrate not only what we have accomplished in translation – reaching 100 languages – but also in voice and OCR,” Huang remarked. “We aim to break through language boundaries.”

Multilingual model

According to Huang, the cutting edge of machine translation technology at Microsoft is a multilingual AI model dubbed Z-code. The model blends various languages from the same linguistic family, such as Hindi, Marathi, and Gujarati from India. As a result, the different language models learn from one another, reducing the amount of data required to create high-quality translations. For example, when the translation model is trained with related French, Portuguese, Spanish, and Italian data, the quality of translations to and from Romanian improves.

“We can exploit the shared transfer learning potential and apply it to develop the whole language family,” Huang added.

The lowered data requirements also allow the Translator team to create models for languages that have limited resources or are endangered owing to declining numbers of native speakers. Several of the languages supported by Translator fall into this category.

Huang stated that Z-code is part of a bigger effort to merge AI models for text, vision, audio, and language in order to create AI systems that can talk, see, hear, and comprehend, and hence complement human skills more effectively. The continuous release of additional languages produced using multilingual model training technology, he claims, is proof of this so-called XYZ-code vision coming into focus.

“It’s bringing people together,” Huang added. “Because of our XYZ-code vision, this capacity is now in production.”

Posted in Microsoft | Tagged: Microsoft

Data is taking us in new directions: How digitalization is changing city mobility

Posted on May 23, 2022 by Marbenz Antonio


The ease and speed with which we may move around a city have a huge effect on individuals who live and work there, affecting our livelihoods, wellness, and quality of life. Mobility is critical to the efficient operation of cities.

According to Heiko Huettel, Microsoft EMEA’s automotive head, innovation enabled by a mix of digital technology and collaboration between local governments, transportation providers, and mobility firms will help offer the answers required. Mobility is evolving into a smart platform, fueled by connection, real-time data, and artificial intelligence.

Cities are growing in size and in importance

According to the latest TomTom Traffic Index, traffic has been at all-time lows in many places around the world since the beginning of the pandemic, with one-fifth (19%) less congestion globally and congestion down by an average of 26% during rush hour (Europe: 24%; North America: 40%; Asia 11%). However, the roads will not remain calm for long. TomTom’s vice president of traffic and travel, Ralf-Peter Schäfer, believes it will recover.

“That is why now is the moment for city planners, policymakers, businesses – and drivers – to assess what they will do to make future roads less crowded,” he says.

According to UN figures, more than half (54%) of the world’s population already lives in cities, with that figure expected to climb to 70% by 2030.

Economic potential is a big draw for cities. New York City contains more wealth than the whole country of Canada. London has the same GDP as the Netherlands, and if Tokyo were a country in its own right, it would be the world’s 15th largest economy.

Cities have typically become more economically productive due to the concentration of enterprises, talent, and other resources. London, for example, has the greatest productivity in the United Kingdom, more than 50% more than many of the country’s former industrial heartlands, according to the Institute for Public Policy Research.

Cities are facing mounting challenges as mobility hits the buffers

According to Open Data Institute research, some cities are no longer benefiting from scale-related productivity increases. The reason: strain on public transportation infrastructure. During peak travel hours, Birmingham, the United Kingdom’s second-biggest city, performs economically like a city with a third of its population. It is far from an outlier.

Not only are expanding populations putting a strain on mobility, but e-commerce has roughly tripled in the previous five years (even before accounting for the pandemic), resulting in a spike in congestion caused by last-mile delivery.

As a result, there is increasing pressure on mayors and municipal authorities to take action to decrease traffic, reduce pollution, cut carbon emissions, and assist in urban redevelopment to protect the health and economies of their cities.

As part of its aspirations to become carbon neutral, Copenhagen is investing in a variety of micro-mobility programs, aiming to employ this form of transportation for more than half of all journeys by 2025.

Mobility needs are evolving: are cities ready?

Regulatory and policy demands (such as the need to cut carbon emissions and improve air quality), changes in people’s wants and behaviors, the increase in e-commerce deliveries, the rise in remote working, and other factors are all adding to the complexity.

During the pandemic, lockdowns served as behavioral “circuit breakers,” adds Huettel, forcing many individuals to rethink their mobility demands and options.

With flexible, “hybrid” employment on the rise (76% of European organizations currently have remote work rules), many office workers will likely commute less frequently or during off-peak hours. And this is intensifying pre-crisis tendencies like the transition away from private automobile ownership and fixed-schedule public transportation and toward new mobility alternatives like micro-mobility, subscription- and shared-transport services, he argues.

Huettel adds that it will take time for the effects of these changes to be realized and for us to distinguish between disruption induced by the pandemic and longer-term shifts in mobility. However, one pre-crisis trend remains: the demand for better ways of moving people and goods around cities.

Routing mobility through the cloud

Dr. Matthias Kempf, a founding partner of automotive consultancy Berylls Strategy Advisors, emphasizes the significance of acting quickly to fulfill increasing demands.

“We predict that cities and present mobility networks throughout the world will be unable to satisfy expanding demand, resulting in a travel capacity deficit of 150–200 billion person kilometers in the next 15 years,” he adds. “That’s the equivalent of 25,000 trips to the moon and back.”

He also claims that cities are designed largely for vehicles rather than people.

“Mobility must be designed with people in mind,” he argues. “Because people dislike changing ‘vessels,’ we want a quick, pleasant, and easy integrated transit system that encourages people to utilize the most efficient means of transportation. What is crucial here is the availability of high-quality mobility data that allows mobility systems to be planned and operated as needed.”


However, innovation is changing this. While cars will continue to be a major mode of transportation for the foreseeable future, they are being reimagined “as-a-service” and consumed as a bundle of kilometers or minutes by companies like Volkswagen subsidiary Urban Mobility International (UMI), which has launched an electric vehicle car-sharing service called WeShare. The idea is to minimize the number of automobiles on city roadways while making better use of those that remain. Customers use their smartphones to hire a car, which they then return to any location within the operational area. The company uses AI technology to analyze demand and parking behavior trends, allowing it to enhance its service and reduce operational expenses.


“Our concept of how we use and experience transport is transforming,” says Philipp Reth, CEO of UMI, who says the current generation, dubbed “Generation Share,” is choosing to rent on demand and pay as it goes.

“It’s cost-effective, convenient, and environmentally savvy,” Reth says. “It means car ownership is becoming less and less relevant for urban mobility – it’s increasingly more about a multi-modal mix of options that are interlinked in a meaningful and intelligent way.”

As cities invest in fast wireless connectivity and the cloud to create new digital highways, “digital vehicles” – vehicles that are as much made of software as they are of metal and rubber and able to intelligently interact with and navigate the outside world – will be key to realizing the potential of smart mobility, says Christoph Hartung, CEO of Bosch’s software subsidiary, ETAS.

He claims that “cars are soon becoming linked components in a connected mobility system.”

How digital transformation is driving economic change

Bosch has announced a collaboration with Microsoft to create a software platform that would easily link automobiles to the cloud. The purpose of this partnership is to simplify and speed the development and deployment of vehicle software in compliance with automotive quality standards throughout a car’s lifespan. The new platform, which will be built on Microsoft Azure and include Bosch software components, will allow the software to be produced and distributed to control units and vehicle computers. The partnership will also focus on the creation of technologies to improve the efficiency of the software development process.


“As cars get smarter, they open up a limitless number of options for innovation,” adds Hartung. “Having access to open, standardized platforms lowers the barrier to entry and shortens the time to market for new mobility services and operators. This will benefit customers by providing more options and will allow us to become wiser about how things travel across cities in connected vans and trucks.”

According to Hartung, the ecosystems to which cars are connected must be open and standardized, or else solutions will be fragmented and adoption will stall.

“People are traveling not only inside London, Berlin, or Paris, but from Paris to Berlin, and they want a seamless experience,” he adds.

Shashi Verma, Chief Technology Officer at Transport for London (TfL), spends a lot of time thinking about how to encourage healthy customer behaviors and choices through experiences.

“We want to guide people to the most efficient means of transportation, which is often public transportation or active modes like walking and cycling,” he explains. “However, finding out how the public transportation system works in a new city may be difficult, so many people simply join the queue for a taxi.”

“That is something we do not want to happen. We’re focused on making things easy for customers in London, such as open payments and making the experience of paying for transportation similar to buying coffee or anything else. People do use it, and we’ve seen big gains from bringing the right sort of ease to the right kind of product.”


TfL’s goal, according to Verma, is not only to run London’s transportation system but also to boost the city’s productivity. Keeping the city running requires a constant focus on balancing capacity and demand. TfL relies on a variety of Microsoft tools and services for its daily operations. For example, using Microsoft Teams to conduct remote inspections and engineering design assessments allows new infrastructure to be assured and brought into service during lockdowns without requiring an on-site presence. TfL also uses Azure, Microsoft’s secure cloud hosting environment, to host a variety of apps and data, including Power BI, which helps make data on recent passenger journeys and road collisions more accessible to the public.

“The most important use of technology is to ensure that capacity is being used properly and that we, as operators, are getting the most out of that available capacity,” Verma adds. “Digital signaling, for example, has increased the capacity of the Victoria line by over 40%. It is difficult and expensive to build a new train line, but if you can achieve the same by improving the technology within existing rail lines, it is a significant benefit to any city.”

Smarter mobility, according to Verma, comes down to “better use of data to lead people to make the correct decisions, and better use of technology to drive those choices and strengthen infrastructure.”

Many parties believe that public-private collaboration on open data and data sharing is important to the development of innovative mobility solutions.

Posted in Microsoft | Tagged: Microsoft

How technology is assisting pharma equipment producers to stay the best in class?

Posted on May 23, 2022 by Marbenz Antonio


We have witnessed how adversity can lead to creativity as the globe continues to face multiple obstacles as a result of the COVID-19 pandemic. We went from early vaccine trials and testing to delivering real doses to vulnerable people all around the world in less than a year. More than 560 million doses have been administered — and counting — making this the world’s largest organized vaccination program.

Along with the scientific achievements that have gotten us to this point, vaccine deployment requires a massive logistical effort. One critical component in this chain is the rapid manufacturing of enough glass vials to hold the vaccines and syringes to administer them. This must be accomplished without sacrificing quality, which is non-negotiable when dealing with something as important and sensitive as a vaccine. Stevanato Group, an Italian company, is turning to new technologies to help deliver on both.

Positive pressure from the pandemic

The company, which has offices in the United States, Mexico, and Japan, got involved in the pandemic response early on. In addition to supplying glass vials and syringes to more than 70% of the global treatment and vaccine programs in the most advanced development stages, it produces plastic components used in virus detection kits and builds essential vaccine inspection equipment used by several global pharmaceutical companies to ensure the integrity of their vaccines.

Given that the company’s goods are used at nearly every stage of the process, from testing to immunization, ensuring business continuity while keeping on-site staff safe has been a top goal. Collaboration solutions like Microsoft 365 and Microsoft Teams have played an important role in minimizing interruption and connecting colleagues and customers as much as possible.

“We all recognize the very real impact that our job has on the globe,” says Raffaele Pace, Stevanato Group’s Engineering Vice President of Operations. “For example, if we deliver vaccine inspection equipment to our clients late, it has an impact on vaccine supply chains. That type of positive pressure is advantageous because it gives you the feeling that you can make a difference.”

Necessity is the mother of innovation

The pandemic also prompted the corporation to switch to remote testing of the inspection equipment it sells to pharmaceutical companies all around the world.

The factory acceptance test (FAT) is the final pre-delivery test that ensures the customer’s equipment is operational and allows the firm to fix any remaining issues. It is usually done in person at Stevanato Group’s facilities in Italy or Denmark before final delivery. But as the pandemic swept through Italy and the rest of Europe in early 2020, it became clear that this way of working was no longer viable, even as the need for quality inspection equipment was more pressing than ever. To address this, Stevanato Group began offering clients the option of remotely attending factory acceptance testing using mixed reality technology for all of its equipment portfolio, including glass converting lines and assembly and packaging solutions.

Members of the company’s engineering teams wear a Microsoft HoloLens 2 headset and walk each customer through their equipment inspection process, showing the main machine components, demonstrating how to operate it, and sharing important documentation in real time – all while the customer may be on the other side of the world.

“Some clients were apprehensive at first, but after they saw the technology in action, they saw the benefits extended beyond the pandemic, particularly in terms of cost and time savings when you eliminate the need to travel,” Pace explains.

This breakthrough has had a huge impact – not only are all FATs for Stevanato Group’s vision inspection devices now done entirely digitally, but the technology has also shown its worth early in the inspection process. Because conventional, in-person FATs are one of the final processes before equipment delivery, misunderstandings or difficulties that must be handled on a very short timeline may develop. Customers may now “view” the equipment for themselves an unlimited number of times.

The team has also started to provide remote audits and mixed reality meetings to incorporate clients earlier in the manufacturing process. For example, by using HoloLens 2 for early design review meetings, they can clarify the process, address any concerns, and provide the customer with a real-time sense of how their equipment is evolving – all of which reduces the likelihood of delays and issues affecting the overall production of medicines.

Posted in Microsoft | Tagged: Microsoft

With an enterprise browser, you can close IT gaps in your firm

Posted on May 23, 2022 by Marbenz Antonio


Web standards and browsers have done an excellent job of establishing a unified platform for web and SaaS app delivery. The browser has evolved into the key interface via which people do tasks.

However, there are gaps that modern businesses must fill, including:

  • Access to internal web apps: Today, this frequently necessitates the use of a VPN, which can pose a security risk.
  • Single Sign-On (SSO): Users require an SSO solution to minimize logins, and IT must enforce multi-factor authentication (MFA).
  • Data Loss Prevention (DLP): Copy/paste, downloads, printing, and screen capture are not controlled by standard browsers, which might pose a danger for crucial software, particularly on BYOD devices.
  • Keylogger Protection: Keystroke loggers may be put on devices, allowing malicious hackers to steal employee credentials or valuable company data.
  • Malware and Ransomware Protection: When users open links in a conventional browser, they may accidentally install malware that can lead to ransomware attacks.
  • Phishing Protection: Users can click links that will take them to phishing sites that will steal their data.
  • Analytics: Standard browsers do not provide analytics for evaluating security risk, performance, or app consumption.
  • Safe browsing: Standard browsers do not block all potentially dangerous URLs, nor do they enforce company policies.

Organizations can fill some of the gaps with a combination of third-party solutions such as extra clients, cloud brokers, and managed PCs (which do not work for BYOD use cases). An enterprise browser can provide an excellent solution for security, performance, and user productivity by allowing IT to address these concerns in the browser that accesses and displays the apps.

Citrix Workspace Browser is a Chromium-based enterprise browser integrated with the Citrix Workspace app and part of the Citrix Secure Private Access solution, which allows zero trust and VPN-free access to web and SaaS apps. It fills the gaps above and enforces policies at the best point in the flow – the browser. Adaptive authentication, which is also available with Citrix Secure Private Access, can evaluate the state of the client device, providing suitable contextual controls for both BYOD devices and managed PCs.

Web and SaaS apps can start in the local containerized Citrix Workspace Browser to optimize cost and performance, or they can launch and operate in the Citrix Secure Browser service, the Citrix Cloud-hosted remote browser isolation (RBI) solution, depending on the policy. Citrix is now able to provide a full solution for the organization to safely complete tasks.
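
The routing described above is a policy decision. The snippet below is not Citrix’s policy engine or configuration format; it is only a generic, illustrative sketch of the kind of contextual rule an administrator expresses (managed vs. BYOD device, app sensitivity) to decide between the local containerized browser and the cloud-hosted remote browser isolation service.

```python
from dataclasses import dataclass


@dataclass
class LaunchContext:
    device_managed: bool       # corporate-managed PC vs. BYOD
    app_sensitivity: str       # e.g. "standard" or "high"
    keylogger_protection: bool


def choose_browser(ctx: LaunchContext) -> str:
    """Illustrative contextual rule, not an actual Citrix policy:
    keep low-risk launches local for cost and performance, and push
    risky ones to remote browser isolation."""
    if ctx.app_sensitivity == "high" and not ctx.device_managed:
        return "remote-browser-isolation"   # cloud-hosted RBI service
    if ctx.app_sensitivity == "high" and not ctx.keylogger_protection:
        return "remote-browser-isolation"
    return "local-workspace-browser"        # containerized local browser


print(choose_browser(LaunchContext(device_managed=False,
                                   app_sensitivity="high",
                                   keylogger_protection=False)))
```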

Posted in Citrix | Tagged: Citrix

What factors to consider when selecting the best thin-client endpoint?

Posted on May 23, 2022 by Marbenz Antonio


The endpoint is a critical component of every Citrix project. The endpoint, like the underlying computing and storage infrastructure, is an important component of the overall system and may influence the strength of security, simplicity of maintenance, and quality of the user experience.

Let’s take a look at some things to think about as you work toward your ideal thin-client endpoint.

A Secure, Linux-based Operating System

A Linux-based operating system is well suited for accessing Citrix Workspace. Linux-based endpoints need few resources, are highly secure, offer easy maintenance tools and plenty of driver support, and can result in significant cost savings. These Linux-based operating systems can differ from one manufacturer to the next. Evaluate the security of the operating system in question: it should be read-only and run only a small set of independent software packages. Secure communication between the various components of the thin-client endpoint solution (device, client, server, gateway, etc.) is also required.

Central Management from Anywhere

Citrix has enabled you to centralize user workspace management, and the endpoint solution should do the same. You should be able to change the setup from anywhere. Because many users work from home or in hybrid environments, many endpoint solutions now include a gateway to securely control and monitor those endpoints. The administration tool should allow you to configure the Citrix Workspace client in every way conceivable. Some solutions will even let you make configurations that are normally done through the command line in a simple GUI. These solutions should also provide visibility, monitoring, and reporting.

Citrix Workspace App Releases Should Be Followed by Endpoint Vendors

Some suppliers are faster than others in upgrading their operating systems to support the most recent Citrix Workspace app versions. Staying current guarantees that you have the most up-to-date features, which is especially critical if the upgrades impact security.

Simple System and App Updates

When a program such as Citrix Workspace is upgraded, certain vendor systems demand a whole OS update; others can update just that component, making improvements easier to roll out. Choose a solution that allows you to schedule app or OS updates and can automate the entire process. Some tools distribute updates from device to device rather than to all devices at once, which uses less network capacity.

All Peripherals Must Be Supported

Customers frequently select thin-client endpoints that are incapable of handling changing needs, particularly when it comes to endpoint-connected devices such as printers, scanners, and signature pads. Consider the present or future peripherals you will want to connect to your endpoint and include them in your testing of any solution, especially if you are transitioning to a Linux-based OS; you should also use the thin-client vendor’s support and expertise to guarantee peripherals function.

Plan Your Hardware Purchases for the Next 10 Years

The refresh cycle for the thin client you select should be at least twice as long as a standard desktop cycle. Many clients are astonished to learn that thin client pricing is sometimes greater than PC prices. However, in addition to price, you must consider their durability. When using fanless, small PC technology, energy consumption may be reduced to 10% of that of a regular PC. Don’t settle for the cheapest hardware. Saving $100 to $200 today may come back to haunt you later. Many customers have chosen a less expensive choice only to discover that it does not meet later requirements such as additional monitor support, additional apps, a higher screen refresh rate, new peripheral devices, and so on.

Repurpose Your PCs (Old and New)

Most thin-client suppliers also allow you to reuse PCs. You might migrate your PCs from their present operating system to a Linux-based operating system and reap the benefits of a thin-client endpoint solution. In the case of existing PCs in your firm, you may extend their life and save money. You can even select your hardware vendor and reuse such devices utilizing thin-client vendor technologies. In any case, the thin-client provider should be able to provide advice on optimal hardware specifications as well as performance expectations.

A Licensing Model That Meets Your Requirements

Because vendor licensing for thin-client endpoint solutions varies, you must locate a vendor who meets your technical requirements while also meeting your licensing requirements. Some suppliers charge a fee for both the operating system and the administration tool. Some companies charge for certain aspects of the management tool. Some provide subscription licensing with varying levels of user count flexibility, while others employ a perpetual license model. When it comes to reusing endpoints, some manufacturers use MAC addresses to license them, while others use a more flexible central-licensing technique. Depending on the manufacturer, licensing might also be per concurrent or per identified device. There are several possibilities, so make sure you have these insights while making your decision.

Support from the Vendor

When you choose a thin-client endpoint solution, you are entering into a partnership rather than merely purchasing new hardware and software. You should be aware that end-user computing is not always simple, and problems do arise. Your vendor should be attentive and give the assistance you require. That is why we advocate including a proof of concept (PoC) in the selection process. The majority of thin-client endpoint initiatives need back-and-forth communication between the client or partner and the provider, and that communication is critical to success. A proper proof of concept should help you succeed in the long run. After all, you’ll most likely have the thin-client endpoint solution in place for at least ten years. You should get the impression that the vendor is invested in your success and will provide the assistance you require as requirements change.

Posted in Citrix | Tagged: Citrix

Web 3.0 and the Metaverse Are Quickly Approaching: These Are the Digital Skills You’ll Need to Succeed

Posted on May 23, 2022 by Marbenz Antonio


People will control Web 3.0, with a shift away from technology companies profiting from the product (that’s you and me) and toward a movement that uses new technology and the power of blockchain to protect ‘real life’ identity. Users will be able to pay for goods and services and post comments while protecting their privacy. The way we develop digital services, build client connections, and publish content is set to change.

The call for decentralization has started

The Digital Markets Act started calling out gatekeeper corporations like Google, Apple, and Amazon last week for their approach to monetizing personal data and their sometimes questionable approach to securing customers’ data. Facebook, whose business model is focused on selling targeted advertising based on personal data, was renamed Meta and has shifted its attention to virtual real estate. Many more small acts of decentralization, such as the open-source movement, are taking place, all of which will have consequences for businesses worldwide as they navigate new waters of opportunity.

A lack of digital skills has topped every skills shortage report we’ve seen over the last five years, but the problem is becoming more significant this year as technology moves toward automation, AI, and the Metaverse. According to a PwC report released this week, 85% of UK businesses are currently experiencing a critical digital skills shortage, followed by a shortage of core business skills such as teamwork, leadership, relationship-building, and communication, as well as softer skills such as empathy, resilience, and agility. Businesses need T-shaped individuals with deep experience and a broad knowledge base.

What are digital skills for Web 3.0?

Imagine being in charge of a multi-disciplined team of 10 digital superheroes that excels at creating content such as videos, blogs, and podcasts, and at security, data structure, SEO, UX, and CMS maintenance. Even so, with Web 3.0 and the Metaverse on the horizon, such a team will need to hone its digital talents to prepare for the future of marketing. Jeff Bullas, a top online marketing influencer, has detailed four important digital marketing skills marketers need to prepare for Web 3.0 and the Metaverse. They are as follows:

  1. Semantic Content Marketing – content created specifically for the user’s needs
  2. Advanced UX/UI – creating frictionless experiences
  3. Immersive Marketing – creating more engaging and immersive content
  4. NFTs (non-fungible tokens) – with players able to sell their virtual assets, picture being Meta and having a virtual shop in an exclusive mall with millions of visitors

According to Forbes, the following are the top Web 3.0 skills that are useful across a business, not simply marketing:

  1. AI writing assistants
  2. Blockchain for the creative economy
  3. Creative commons for the creative economy
  4. Data democracy for the creative economy

How can I identify the Digital Skills I need?

If you want to build a digital team or figure out what skills your workforce needs, the SFIA Framework is an excellent place to start. SFIA is the global standard for digital skills and proficiency, with explicit definitions of digital skills such as acceptance testing, content writing, data visualization, marketing, incident management, and numerical analysis.

If you’re looking for job inspiration, the Government Digital Service has identified the skills needed for digital, software engineering, and cybersecurity professionals here: Digital, data, and technology (DDaT) positions.

Our skilled staff are mapping APMG certifications to the SFIA framework of skills and competencies. We hope this makes it easier to locate the credentials that cover the digital skills you want. If you want to build XD capability, for example, our Experience Design (XD) Practitioner Course maps to the following skills: User Experience Design, User Experience Evaluation, and Business Modeling Knowledge.

Certification and Digital Skills

Agile and Change Management are still highly valued skill sets, but we’ve just introduced supplier certifications that reference open-source content. There is also a desire for problem-solving, creative, and communication abilities, all of which will be in high demand as organizations mature digitally.

At APMG, we have a wide range of certifications available, including AI, Agile, Change, Data Analytics, and Lean, to assist you in developing digital abilities. Here’s an overview of some of the certifications that will put you on the right track for preparing for Web 3.0 and the Metaverse.

Improve product and service design

Our Design Thinking (DTMethod) accreditation has just arrived. This well-thought-out methodology, launched in collaboration with Inprogress Design Lab, introduces popular design thinking frameworks such as the Design Council’s Double Diamond and neatly puts them together into a unique method that gives organizations a structured approach to solving problems, delivering value, and eliminating waste.

Professionals in AgilePM, AgileDS, and AgileBA will be familiar with some of the methods, such as human-centered design (HCD), the Minimum Viable Product (MVP), and creating user stories and prototypes, but will find the tools in the exploration stage useful for collaborating with the wider business. The Challenge Tree, for example, which investigates causes and effects, is a useful exercise for understanding stakeholder requirements before starting a prototype.

DTMethod also draws on several agile concepts and encourages organizations to create a culture that fosters innovation, for example by embracing risk-taking and removing the fear of failure through exploratory and experimental activities.

This is a strong method for agile organizations looking to foster creativity and problem-solving using an evidence-based approach. Interviewing and structured brainstorming are two examples of skills you will learn.

Design services, not websites

A good digital service should be like good restaurant service: it is frictionless, you don’t notice it, and it anticipates what you need before you realize it. Web 3.0 is all about open source, community, and collaboration, and the Government Digital Service (GDS) provides all of that and more, having built a huge bank of resources for you.

GDS has just launched a Design System that is full of useful suggestions as well as components and patterns that you can use on your website. For example, if you need to ask your customers to confirm their email addresses, GDS provides all of the patterns you need to consider, saving you time scratching your head trying to figure out what happens in each scenario: whether customers can access their email account, emails landing in a spam folder, and countless other situations.

Some of this branching logic can help you route answers to inquiries automatically, saving you time on the phone or email, as the sketch below illustrates.
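
As a small illustration of the kind of branching such a pattern has to cover, here is a generic sketch (standard-library Python, not GDS code) of issuing and checking a time-limited, signed email-confirmation token; the secret key and expiry window are arbitrary placeholder values.

```python
import hashlib
import hmac
import time

SECRET_KEY = b"replace-with-a-real-secret"   # placeholder
TOKEN_TTL_SECONDS = 24 * 60 * 60             # example: links expire after a day


def issue_confirmation_token(email: str) -> str:
    """Create a token of the form '<email>|<timestamp>|<signature>'."""
    timestamp = str(int(time.time()))
    payload = f"{email}|{timestamp}".encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return f"{email}|{timestamp}|{signature}"


def check_confirmation_token(token: str) -> str:
    """Return 'confirmed', 'expired', or 'invalid' - the branches a service
    pattern has to handle (alongside resends, spam folders, and so on)."""
    try:
        email, timestamp, signature = token.rsplit("|", 2)
    except ValueError:
        return "invalid"
    payload = f"{email}|{timestamp}".encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return "invalid"
    if time.time() - int(timestamp) > TOKEN_TTL_SECONDS:
        return "expired"
    return "confirmed"


token = issue_confirmation_token("user@example.com")
print(check_confirmation_token(token))   # -> confirmed
```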

The Agile Digital Services (AgileDS) certification was developed by APMG and the Agile Business Consortium, building on GOV.UK guidance created by the Government Digital Service (GDS), to help organizations develop a consistent approach to planning, delivering, and upgrading their digital services.

Time is a currency of experiences

Web 3.0 is a natural progression from publishing engaging user-generated content such as images, videos, testimonials, and podcasts to immersive experiences enabled by new technologies, such as live music concerts streamed into games and, of course, Augmented Reality (it will be huge as soon as they figure out how to stop motion sickness!).

Great immersive experiences allow users to uncover the story for themselves, but unless their journeys branch, they may end up at a dead end.

When you combine the XD Practitioner course with DTMethod, you’ll learn how to create seamless online and offline journeys.

In addition, the Experience Design (XD) Practitioner Course will teach you how to create content that transitions easily between services and platforms, as well as how to create memories for users that deepen over time and keep consumers engaged with your business.

XD also teaches you how to connect intent and meaning, comprehend how information is presented on the website, and construct icons, videos, FAQs, buttons, forms, and surveys.

As Malcolm X said, “The future belongs to those who prepare for it today.”

Posted in APMG | Tagged: APMG

Everything You Need to Know About NSX V2T Migration

Posted on May 19, 2022 (updated May 20, 2022) by Marbenz Antonio


Today’s businesses are being transformed by software. Modern apps are making a significant contribution to company innovation, transformation, and faster service delivery. Modern business models demand dynamic apps that are spread across various clouds. Apps must be adaptable to diverse computing platforms, such as containers, virtual machines, and bare metal. A contemporary network architecture is necessary to ensure connectivity and stability across various clouds to enable these modern apps. Networks must be scalable and adaptable, while still providing an excellent end-user application experience.

VMware has introduced a new version of NSX designed with contemporary apps and end-to-end security in mind to help enterprises build a network optimized for modern apps. Customers can use NSX Data Center (NSX-T) to take advantage of enhanced networking and security features to enable new business models and various use cases. By transitioning to NSX-T, existing VMware NSX for vSphere (NSX-V) users can realize increased business agility and higher network and application performance. Beyond the additional capabilities, customers are advised to move to NSX-T because NSX-V passed End of General Support (EOS) in January 2022 and will reach End of Technical Support (ETS) in January 2023, so customers need to act quickly on the migration. This article examines the NSX-V to NSX-T migration use case, advantages, difficulties, and customer journey, as well as how VMware Professional Services’ data center migration services can help you along the way.

Reasons to migrate from NSX-V to NSX-T

NSX-T was designed with current network requirements in mind. VMware NSX-T was developed from the ground up for the hybrid cloud, to ensure networks can manage changes in application landscapes and security needs. Another goal was to protect against increasingly sophisticated network security attacks. NSX-T is designed to provide networking, security, automation, and operational simplicity for new application frameworks and architectures with diverse endpoint environments and technology stacks.

Cloud-native apps, bare-metal workloads, multi-hypervisor setups, public clouds, and multi-cloud environments are all supported by NSX-T Data Center. NSX-T is a software-defined infrastructure that enables the creation of cloud-native application environments. NSX-T Data Center is intended to be managed, operated, and consumed by development organizations, enabling IT and development teams to choose the technologies that are best suited to their applications.

Among the many advantages of the NSX-T are the following:

  • Improved scale, security, and operational simplicity.
  • NSX Distributed Firewall, NSX Distributed IDS/IPS, NSX Network Detection and Response, Network Traffic Analysis, and NSX Intelligence provide best-in-class security and enhanced protection against malware and ransomware.
  • NSX Federation enables advanced networking for public clouds with centralized network operations.
  • Advanced container networking, micro-segmentation for microservices, and cross-platform visibility.
  • Automation of networking and security.

Challenges with NSX-T adoption

NSX-T is not the same as NSX-V. As a result, transitioning from NSX-V to NSX-T is more complicated than just applying a patch or performing a small update. To migrate properly, network managers must first carefully assess their present network.

The following are the most typical customer migration issues:

  • Examining the present network structure and security services
  • Increasing awareness of network and security difficulties and interdependencies
  • Developing and establishing a migration strategy
  • Choosing an upgrade sequencing and execution plan
  • Keeping track of a large number of configurations and workloads
  • Migrating multiple distributed firewall rules
  • Reducing downtime and data loss
  • Keeping data secure during and after transfer
  • Determining the resources and timeline for planning and carrying out the move
  • Developing the necessary skills and knowledge
  • Validating the best migration path and how to carry it out
  • Using the appropriate automation tools
  • Integrating existing third-party products and services after migration

The migration journey

Migrating from NSX-V to NSX-T (NSX V2T) does not have to be difficult, but it should not be taken lightly. A network must be thoroughly examined to ensure that the necessary hardware and configuration are in place to handle the newly deployed NSX-T workloads. As with most things in networking, a little strategy can go a long way toward ensuring a project’s long-term success. VMware Professional Services can assist you with your move.

Organizations that work with VMware Professional Services and our Master Services Competencies (MSC) partners begin their migration with an NSX V2T Migration Assessment Service. This transition evaluation analyses the current VMware NSX-V deployment and identifies the intended VMware NSX-T state for the future. The goal of this service is to evaluate the customer’s environment, identify preparedness for the migration path, offer a high-level overview of needs, and design the process flow.

VMware Professional Services will examine critical parameters and features in your present VMware NSX-V architecture, such as the following:

  • The number of data centers/sites and the number of hosts
  • Container networking services, contemporary applications, cloud-native apps
  • Security services in use, such as distributed firewalling, micro-segmentation, IDS/IPS, and NDR
  • VMware products and applications installed (e.g., vRealize Automation, VMware Cloud Foundation, VMware Cloud Director, VMware Integrated OpenStack)
  • Products and services from third parties
  • Features in use, such as NAT (network address translation), BGP (border gateway protocol), OSPF (open shortest path first), static routing, load balancing, and north-south/east-west firewall rules, as well as current network topologies

The VMware Professional Services team will go over the relevant use cases for your network deployments, such as Kubernetes and container networking support, networking and security automation, network detection and response against advanced threats, centralized policy management, securing VDI (virtualized desktop infrastructure), disaster recovery, micro-segmentation, IDS/IPS, moving workloads to/from the public cloud, networking and security analytics, and so on.
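
Much of this discovery work can also be scripted against the NSX-V Manager REST API. The sketch below pulls the distributed firewall configuration for review before migration; the manager address and credentials are placeholders, the endpoint path is the one documented for NSX-V 6.x (verify it against your version), and certificate verification is disabled here only for brevity.

```python
import requests  # third-party: pip install requests
import xml.etree.ElementTree as ET

# Placeholders: your NSX-V Manager address and an account with read access.
NSX_MANAGER = "https://nsx-manager.example.com"
USERNAME = "admin"
PASSWORD = "<password>"

# Distributed firewall configuration endpoint documented for NSX-V 6.x.
url = f"{NSX_MANAGER}/api/4.0/firewall/globalroot-0/config"

response = requests.get(url, auth=(USERNAME, PASSWORD), verify=False)
response.raise_for_status()

# NSX-V returns XML; count sections and rules as a rough sizing input
# for the migration assessment.
root = ET.fromstring(response.text)
sections = root.findall(".//section")
rule_count = sum(len(section.findall("rule")) for section in sections)
print(f"DFW sections: {len(sections)}, rules: {rule_count}")
```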

The Assessment Service is delivered in three stages:

  1. Discovery. Obtaining critical information about your network and work environment
  2. Planning and analysis. Evaluating current policies to determine what needs to be migrated and the best method for doing so
  3. Assessment summary report. A document describing the previous two stages and recommending a migration strategy

After you have completed the NSX V2T Migration Assessment Service, you will obtain a detailed list of:

  • Network and security information gathered from the environment
  • Migration requirements and potential risks
  • Recommended hardware expansion plans (if required)
  • A recommended migration approach
  • Recommendations for migrating tooling-supported features
  • Tooling recommendations for unsupported features
  • Workload migration recommendations

Based on the assessment results, VMware Professional Services will sit down with you to plan and discuss several NSX-V to NSX-T migration approaches: coexist, in-place, or lift-and-shift. See our full NSX-T Data Center Migration Guide for a detailed overview of these migration methodologies and the tools available for migration. Whichever migration approach you pick, the following high-level stages are required:

  • Examine the differences between the NSX-V and the new NSX-T environments.
  • Create and assess a migration strategy and a rollback plan.
  • Review the workloads, firewall rules, and load-balancing setup to be migrated (a rule-export sketch follows this list).
  • Create and deploy an NSX-T environment.
  • Import the current NSX-V setup.
  • Apply the configuration and network topology to the NSX-T target environment.
  • Integrate VMware with third-party systems.
  • Test and validate the cutover.
  • Clean up (delete and remove old configurations).
  • Transfer knowledge and documentation.
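
As one small illustration of the firewall-rule review step, the sketch below uses Python’s requests library to pull the existing distributed firewall configuration from NSX-V Manager so it can be reviewed offline before migration. This is a minimal sketch, not part of any VMware tooling described in this article: the host name and credentials are placeholders, and the endpoint path is an assumption based on the NSX-V REST API, so verify it against your NSX-V version.

# Sketch: export the existing NSX-V distributed firewall configuration for offline review.
# Host, credentials, and endpoint path are assumptions -- adapt to your environment.
import requests

NSX_MANAGER = "https://nsx-v-manager.example.com"   # hypothetical host
AUTH = ("admin", "REPLACE_ME")                      # use a read-only account

def export_dfw_config(outfile: str = "dfw-config.xml") -> None:
    """Download the current DFW ruleset as XML so it can be reviewed before migration."""
    url = f"{NSX_MANAGER}/api/4.0/firewall/globalroot-0/config"  # assumed NSX-V DFW endpoint
    resp = requests.get(url, auth=AUTH, verify=False, timeout=30)  # verify=False only for lab use
    resp.raise_for_status()
    with open(outfile, "wb") as fh:
        fh.write(resp.content)
    print(f"Saved {len(resp.content)} bytes of DFW configuration to {outfile}")

if __name__ == "__main__":
    export_dfw_config()

Having the ruleset exported as a file makes it easier to compare the pre- and post-migration configurations during cutover validation.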

There are a number of alternative NSX-V to NSX-T migration engagement models you may use depending on how you want to continue with your migration:

  • Customer-led and executed: With this approach you use the Migration Coordinator Tool and/or VMware NSX Migration for VMware Cloud Director to migrate from NSX-V to NSX-T in a VMware Cloud Director environment without VMware assistance on the actual migration, although VMware technical support remains available for any tools you use. If you choose this strategy, it is highly advisable to have NSX-V to NSX-T migration training (for more information on VMware training and certification, see VMware Learning) as well as strong networking and security skills.
  • Engaging VMware Professional Services and our Master Services Competencies (MSC) partners: For further information on the NSX-V to NSX-T Migration Professional Services Offering, please contact your VMware sales representative. Our offering covers a variety of NSX-V to NSX-T services to help our clients along the way.

When you work with VMware Professional Services and MSC partners, you know you’re getting:

  • A tried-and-true migration process based on VMware best practices
  • A complete migration and rollback plan that includes the level of effort, potential risks, and pre- and post-migration tasks
  • A unified migration deployment across multiple datacenters/locations
  • A transfer of existing workloads that is as successful and frictionless as feasible
  • Quick and secure migration of existing load balancers
  • Successful migration of existing policies and firewall rules
  • Reduced risk and complexity
  • Shorter migration timelines
  • Complete knowledge transfer
Posted in VMware | Tagged VMware

SAP Cloud ALM or SAP Solution Manager for Test Management?

Posted on May 19, 2022 by Marbenz Antonio


SAP S/4HANA Cloud projects require extensive testing. You want to verify that your business processes continue to function as intended after go-live. In this blog article, we will discuss the fundamental differences between SAP Cloud ALM and SAP Solution Manager in terms of test management. With this information, you will be able to decide which ALM solution is best for you. This is another installment in the blog series ALM for SAP S/4HANA Cloud.

Test Cases

Knowing what to test is the foundation of test management. A list of all test cases is required to ensure that every case has been executed and that the results are positive.

In SAP Cloud ALM, you can attach test cases to processes in Process Management (see the corresponding blog post). You can use this to develop test cases for these business processes. A test activity is produced for each process step, and you decide which of them are in scope for your testing. Using instructions and descriptions of expected results, you can explain to testers the steps to be performed while executing the test activities. Test cases, activities, and actions can be entered manually or uploaded using Excel.
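
Because test cases can be uploaded via Excel, teams often prepare that file programmatically. The snippet below is only a sketch of that idea: it uses pandas to build a simple spreadsheet of test cases, steps, instructions, and expected results. The column names are illustrative assumptions, not the official SAP Cloud ALM upload template, so check the template provided in your tenant before uploading.

# Sketch: prepare an Excel file of manual test cases for upload to SAP Cloud ALM.
# Column names are illustrative assumptions, not the official upload template.
import pandas as pd

test_cases = [
    {
        "Test Case": "Order-to-Cash happy path",
        "Process Step": "Create sales order",
        "Instruction": "Create a standard sales order for customer 100001.",
        "Expected Result": "Sales order is saved and a document number is returned.",
    },
    {
        "Test Case": "Order-to-Cash happy path",
        "Process Step": "Post outbound delivery",
        "Instruction": "Create and post the outbound delivery for the order.",
        "Expected Result": "Goods issue is posted without errors.",
    },
]

df = pd.DataFrame(test_cases)
df.to_excel("cloud_alm_test_cases.xlsx", index=False)  # requires openpyxl to be installed
print(df.to_string(index=False))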

The SAP Best Practices content in SAP Solution Manager includes more than just process flow diagrams from which to generate test cases. As part of the solution documentation, it includes Word documents with test scripts for the individual best-practice processes. The test scripts, however, are generic: you may need to update them to suit your process design, just as you maintain instructions and expected results for test activities in SAP Cloud ALM.

Test Planning

You plan the test execution after gathering the appropriate test cases. In SAP Cloud ALM, the preparation status is set to Test Case Prepared, which makes the test case visible for testing. If you want to focus only on the test cases allocated to a single requirement or user story, you can attach them to requirements or user stories. SAP Cloud ALM does not currently support the assignment of test cases to test plans or testers.

In addition to assigning test cases to work packages and work items (which correspond to requirements), SAP Solution Manager and the Focused Build add-on allow you to gather several related test cases into test plans and test packages. This allows you to create dedicated test case collections for specific purposes, such as a work package. The ‘Released for Test’ status allows all test cases in the test package to be executed.

You can assign testers to test cases in SAP Solution Manager. This gives testers a clearer understanding of which test cases are in their workload, and it makes it easier for you, as the test manager, to determine whether a delay is to be expected.

Test Scope Optimization

Extensive testing can require significant effort. As a result, it is important to limit the test scope to those situations that are truly impacted by changes. SAP Solution Manager’s Business Process Change Analyzer (BPCA) and Scope and Effort Analyzer (SEA) make this possible. You can use BPCA to study the consequences of a technical change and discover which business activities in your solution landscape are affected, limiting your testing to only those business processes or process steps that the change touches.

Screenshot: BPCA test scope optimization ranking

Without the need to physically deploy software packages, SEA predicts the major cost and effort drivers of maintenance programs. SEA can be used in the early phases of a project to plan software modifications, and BPCA in the later stages, during project implementation or after the project enters steady-state operations.

SAP Cloud ALM still does not provide such in-depth analysis for test scope optimization.

Test Automation

Automating testing processes results in significant cost savings: you reduce the need for manual testers to repeat the testing process, and this benefit grows with the number of test repetitions. Furthermore, you avoid the additional expense of fixing defects in your production landscape, as well as project delays caused by defects discovered late in the implementation.
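
The point about repetitions can be made concrete with a simple break-even calculation. The effort figures below are invented purely for illustration; substitute your own estimates for building and running a test case.

# Sketch: break-even point for automating a regression test case.
# All numbers are illustrative assumptions, not benchmarks.
MANUAL_RUN_HOURS = 2.0         # effort to execute the test case manually, once
AUTOMATION_BUILD_HOURS = 12.0  # one-off effort to script the test case
AUTOMATED_RUN_HOURS = 0.1      # effort to trigger and review an automated run

def cumulative_cost(runs: int, automated: bool) -> float:
    """Total effort in hours after `runs` executions of the test case."""
    if automated:
        return AUTOMATION_BUILD_HOURS + runs * AUTOMATED_RUN_HOURS
    return runs * MANUAL_RUN_HOURS

# Find the first number of repetitions where automation is cheaper overall.
break_even = next(n for n in range(1, 1000)
                  if cumulative_cost(n, automated=True) < cumulative_cost(n, automated=False))
print(f"Automation pays off from repetition {break_even} onwards "
      f"({cumulative_cost(break_even, True):.1f}h vs {cumulative_cost(break_even, False):.1f}h)")

With these example figures the script reports a break-even after seven repetitions, which is why the savings compound over a long regression test lifecycle.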

A test automation tool is available in the public SAP S/4HANA Cloud. The tool comes with predefined automated test scenarios that can be used and customized, so there is no need to create them manually.

The test automation tool is used to manage automated tests on SAP S/4HANA Cloud. SAP Cloud ALM integrates with this tool to give you access to the relevant test data (a single source of truth). The integration APIs are public, so third-party test automation tools can also interface with SAP Cloud ALM. These tools should be used to test other solutions, such as SAP S/4HANA Cloud, private edition. We are aware that test automation vendors are working on integration with SAP Cloud ALM, but we have yet to hear of such an integration being available.
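
To give a feel for what such an integration could look like, the sketch below shows a generic pattern: obtain an OAuth client-credentials token and post a test result to the ALM system. This is a hedged illustration only; the token URL, API path, scopes, and payload fields are placeholders and are not the documented SAP Cloud ALM API, so consult the official API documentation for the real contract.

# Sketch only: pushing an automated test result into an ALM system such as SAP Cloud ALM.
# Token URL, API path, and payload shape are placeholders, not the documented API.
import requests

TOKEN_URL = "https://your-tenant.authentication.example.com/oauth/token"  # placeholder
API_BASE = "https://your-tenant.example-alm.cloud"                        # placeholder
CLIENT_ID, CLIENT_SECRET = "REPLACE_ME", "REPLACE_ME"

def get_token() -> str:
    """Fetch an OAuth access token using the client-credentials flow."""
    resp = requests.post(TOKEN_URL, data={"grant_type": "client_credentials"},
                         auth=(CLIENT_ID, CLIENT_SECRET), timeout=30)
    resp.raise_for_status()
    return resp.json()["access_token"]

def report_result(test_case_id: str, passed: bool) -> None:
    """Post a single test result; endpoint and field names are assumed for illustration."""
    payload = {"testCaseId": test_case_id, "status": "PASSED" if passed else "FAILED"}
    resp = requests.post(f"{API_BASE}/api/test-results",  # placeholder path
                         json=payload,
                         headers={"Authorization": f"Bearer {get_token()}"},
                         timeout=30)
    resp.raise_for_status()

if __name__ == "__main__":
    report_result("TC-0001", passed=True)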

SAP Solution Manager provides Component-Based Test Automation (CBTA), which supports most SAP UI technologies. You can easily develop modular automated tests that are simple to repair if they are broken by software changes. CBTA creates the script in the background from reusable components that are easy to fix and maintain.

Partner tools coupled with SAP Solution Manager via the Test Automation Framework can be used to extend the capabilities of the SAP Solution Manager test suite. The framework lets you connect certified test automation tools from partners and third-party sources with SAP Solution Manager, so you can also test non-SAP apps as well as those SAP UI interfaces that CBTA does not currently support.

Tricentis Test Automation for SAP Solution Manager (TTA) is available to SAP customers with a valid SAP Enterprise Support maintenance contract. It is a separate component of the maintenance contract, not part of the SAP Solution Manager usage rights. TTA and SAP Solution Manager connectivity is available, however, allowing you to use both for automated testing of SAP S/4HANA Cloud and SAP S/4HANA Cloud, private edition.

Test Result

Following test execution, you need an overview of the test results. This shows whether the quality is good enough to begin further activities such as a go-live. A Test Execution card in the SAP Cloud ALM project overview offers an overview of the test case status, and you can drill down for more information. The requirement traceability app also provides test execution information, and the Test Execution Analysis can be used to monitor the progress of test execution over time. If a test fails, you can create defects and track how they are resolved.

Screenshot: Test preparation and execution status in the requirement traceability app of SAP Cloud ALM

The Test Suite Dashboard in SAP Solution Manager and the Focused Build add-on offers additional information about the state of testing. You can view the status and defects of individual test plans or waves, which provides a more detailed picture than simply monitoring the test case status of the entire project. Defect management is also available; in Focused Build, test cases and defects are linked directly. This is not yet available in SAP Cloud ALM.

For regulatory reasons, certain businesses require electronic-signature confirmation of positive test results. This is not yet possible in SAP Cloud ALM; it is only possible in SAP Solution Manager.

Posted in SAP | Tagged SAP

How to Drive Data-Driven Business Transformation: As-Is vs. To-Be Process Models

Posted on May 19, 2022 by Marbenz Antonio


In conventional Business Process Management (BPM), as-is process models are manually created representations of how individuals (experts or process participants who deal with the process daily) believe a process actually functions. To-be models, on the other hand, describe how a process should, ideally or pragmatically, run in the future. The differences between the as-is and to-be models of the same process can then be used to determine the adjustments that must be made to the existing process to attain the intended future behavior. The emergence of process mining raises certain questions about this viewpoint:

  1. Given that we can now generate process models from event logs, can a manually constructed process model that is not based on execution data still be regarded as an as-is model?
  2. Even if such a process model can still be treated as as-is, isn’t it inferior to a mined model, and shouldn’t the name reflect this?

In this blog article, we set out to clarify these questions, while also considering comparable difficulties around the concept of a to-be process model.

As-Is: Beliefs, Experiences, Data, and Knowledge 

The goal of as-is process models is to capture how a process works in practice. However, as-is models do not have to replicate all the intricacies of the real-world behavior of all process instances. In the spirit of George Box’s famous remark that “all models are wrong, but some are useful,” an as-is model may, for example, merely reflect the happy path through a process to give readers a basic idea of how the process works at a glance. As-is process models can be developed using expert input, feedback from process participants, event log data, or a combination of the three:

  • Experts may specify how they believe a process is carried out across one or more IT systems.
  • A group of process participants can share their subjective experiences of performing the process in collaboration with IT systems and other human actors. These participants may be internal (typically employees) or external (e.g., customers or suppliers).
  • An event log of the process may be extracted from the relevant IT systems to generate a process model automatically (a process-mining sketch follows this list).
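
To make the third option concrete, the sketch below uses the open-source pm4py library, which is not mentioned in the original article and is offered here only as one possible tool. It discovers a process model from an event log in XES format; the file name is a placeholder for a log exported from your own systems.

# Sketch: discovering an "as-is" process model from an event log with pm4py.
# pm4py is one possible open-source tool; the log file name is a placeholder.
import pm4py

# Load an event log exported from the source IT system (XES format assumed).
log = pm4py.read_xes("purchase_to_pay.xes")

# Discover a Petri net with the inductive miner and render it.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)
pm4py.view_petri_net(net, initial_marking, final_marking)

# A directly-follows graph gives a quicker, frequency-annotated overview.
dfg, start_activities, end_activities = pm4py.discover_dfg(log)
pm4py.view_dfg(dfg, start_activities, end_activities)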

In all cases, the result is a stand-alone model with different strengths and weaknesses:

  • While experts may have a good understanding of the process and the related IT systems and roles, their model may not accurately capture the details of IT system behavior, or of human decision-making and cooperation “on the ground.” It merely represents the experts’ opinion of the process.
  • Process participants contribute to a model that reflects human experience, which must be acknowledged. However, people may report process behavior through the lens of their subjective biases, such as their own career goals. Additionally, complicated IT system behavior may not be understood by, or even visible to, participants.
  • Event logs may accurately represent process behavior that occurs in specific IT systems, but they miss important aspects of human behavior, such as workarounds humans may use outside the IT systems to speed up the process or to facilitate outcomes that are desirable for human participants. Also, judgments about the abstraction level of the process must be made both when extracting data for event log creation and when evaluating event log data: first concerning event granularity and then concerning the number of variants the model should cover.

Models that primarily use expert and process-participant feedback are based on beliefs and experiences, whereas data-driven models are based on records. Only by integrating human-centered and data-driven viewpoints can we build reliable process knowledge; and, in any event, the amount of trust we place in a process model varies from case to case. The value of human experience is reflected in SAP Signavio’s Journey Modeler and Journey-to-Process Analytics tools.

To-Be: Standardized Reference and Individualized Target Behavior

We discussed above how well an as-is model reflects real-world process behavior. In the same way, we can analyze how well a to-be model captures desired, attainable, and viable target process behavior. A to-be process model may, for example, represent the standard process that an enterprise software module implements, or it could mirror an organization’s legacy process, although with a higher degree of automation. To objectively assess the quality of a to-be process model, one can therefore look at how well the model reflects:

  • How process participants and stakeholders would want to work (desirability)
  • How the organization can function, taking into account technological and socio-organizational constraints (feasibility)
  • How the organization should function, keeping business constraints and objectives in mind (viability)

Additionally, we must evaluate the to-be model’s use case, which influences the planned granularity and breadth. To-be models created for automation, for example, may have more technological detail than models produced for risk and compliance purposes, which may in turn include descriptions of risks and controls that an ‘automation’ model does not. The use case consideration also extends to as-is process models.

Naming

There are alternative naming pairs for as-is and to-be; in particular, current state and future state are commonly used. We advocate using as-is and to-be for the following reasons:

  • It is commonly used terminology.
  • It is concise.
  • It does not employ the term state, which in engineering language usually refers to the state of running instances, i.e., to instance-level properties rather than model-level attributes.

Conclusion

To determine whether a process model is fit for its purpose, we recommend analyzing its limitations: whether the human or data-based input for an as-is model accurately reflects reality, and whether the technological, socio-organizational, and business contexts of a process have been fully considered when developing a to-be model. We have presented a set of guidelines for how to think about as-is and to-be process models in a data-driven environment. In particular, we advise using as-is and to-be as descriptors of a model’s purpose rather than of the origin of its information.

Posted in SAP | Tagged SAP
