OUR BLOG


Category: Red Hat

How open source may help CIOs create the future they want

Posted on May 16, 2022 by Marbenz Antonio


A little over a decade ago, Marc Andreessen observed that software was eating the world. We can now update his line: software ate the world. Software has taken over our businesses and how we create value for our customers.

Consider this: we are a software factory, and we are helping you become one. These capabilities let you create the future you want. With hybrid cloud flexibility, grounded in Linux and open source, you can choose where to run your applications based on your business needs. A shared platform like OpenShift brings repeatability to software production workflows and prevents hand-crafted errors. Ansible lets you automate the complexity of distributed systems, and technologies like Advanced Cluster Security (ACS) de-risk the software supply chain from development to production.

Your business is software, and no software business can exist without developers. A high-velocity development team is distinguished by its ability to move quickly from experiment to production. When dealing with the edge, this ability becomes even more vital.

Some of our data-overload difficulties are comparable to those we’ve faced in software development, but we’re now working in a new, more sophisticated environment. In other words, whereas software ate the world, AI is now devouring software.

This is happening because so much software now connects with the outside world. Businesses seek insights from data the same way they seek value from software. They want to be more data-driven and are embracing data and AI to get there; as noted earlier, this is how we enable our people to make smarter decisions. We can not only use data to make better decisions, but also provide better customer experiences by embedding intelligence in the goods and services our customers use.

Red Hat had similar issues, and as we sought to meet our own requirements (to scratch our own itch), we realized we weren't alone. True to our roots, we made our work public by launching a community initiative, and it was there that we could share our knowledge of data science and machine learning with clients and partners. Open Data Hub is a blueprint for creating an AI-as-a-service platform.

It serves as the cornerstone for the Red Hat OpenShift Data Science platform, which we debuted last year. And, as we’ve seen, AI isn’t a one-and-done project. You’ve got your software development process in place. Do you have a plan for AI development?

Consider this: your source code is data, and your deployed apps are deployed machine learning models. The discipline of moving from source code through testing to large-scale software production is well established. But with AI, are you following the same process from development through deployment? And it must happen at scale.

Consider that the average organization has a few thousand applications; soon it may also have thousands of machine learning models. As decisions become more reliant on AI/ML, I don't know about you, but I want confidence in the model making those decisions before acting on them.

Creating trust entails, in part:

  • Collaboration – taking part in creating the model
  • Transparency – understanding how the model was created
  • Auditability – determining what changes were made to models and their effects on the outcomes

The security provided by data centers and IT assets being safely behind the four walls of headquarters is long gone for CIOs. The cloud, lightning-fast computers, wireless network advances, and the development of far-flung yet critical remote activities have all contributed to this. However, the technological liberties we have today are not without their drawbacks. We believe edge computing will be transformational in this area.

Edge computing refers to the capacity to derive insights from data and act on them locally, where they are needed. Intelligent devices are pushing the limits of where computing may take place — on the ground, in space, and everywhere else there is a value to a company or, perhaps, humanity itself.

Edge computing may now take place at or near the actual location of either the user or the data source — whether that's a speeding SUV on the highway, sensors monitoring a natural-gas pipeline in the middle of nowhere, or onboard a satellite circling the planet.

That is the future of hybrid technology.

Your workloads may span the usual IT footprints of data centers, clouds, and the edge with Red Hat. We bring open source community innovation to you, giving you the freedom to select where and how you create and distribute your apps and machine learning models securely.

Let us discuss security. There's a lot of danger out there: the Apache Log4j flaws showed that businesses need to be aware of what open source they've installed and how actively they're monitoring it.

Open source is present in practically all audited codebases. You could say that open-source software, too, ate the world. But ubiquity makes it a target: software supply chain attacks aiming to exploit vulnerabilities in upstream open-source ecosystems increased by 650% year over year in 2021.

Without a doubt, this is a major concern for software firms like Red Hat. And, of course, for you: as you continue to use software to grow your business and differentiate yourself. It’s at the top of government agendas all across the world, especially with ransomware and protest-ware on the increase. Throughout the development lifecycle, we must guarantee that the integrity of software upgrades is secured and confirmed.

To put it another way, the key to adopting open source in the workplace is knowing what you're using, where it's being used, and how it's being used. Obviously, Red Hat is concerned about the origin and security of the open-source code used in our products. We're also building and shipping tools so you can do the same work independently.


Why containers, white-box devices, and SD-WAN are perfect for cloudifying the network edge

Posted on May 16, 2022 by Marbenz Antonio


Containers, software-defined wide-area networking, cloud edge communications, and white-box devices

Compared to dedicated bare-metal solutions, containerized Software-Defined Wide Area Networking (SD-WAN) on white-box x86 platforms offers a higher degree of flexibility and similar performance, especially when the goal is to deliver dynamic edge compute solutions with stronger security capabilities.

Container technology allows software workloads to be deployed anywhere, letting telecom service providers and others construct flexible SD-WAN networks that improve the security profile of multi-cloud and edge cloud communications. Corporate applications in numerous public and private clouds can be connected with edge compute devices or uCPE at branch offices, with weather-proof nodes attached to IoT devices, or with 4G/5G edge devices in fleet cars, trucks, heavy equipment, or boats, for example.

Edge computing devices are easier to acquire and manage using white-box technology, which reduces outlays for spare parts and recurring maintenance expenses, while container technologies make deploying or upgrading software in edge devices simpler.

A multi-vendor setup for performance testing containerized SD-WAN

Intel, Red Hat, Turnium, and TietoEvry partnered on lab-based testing in the fall of 2021 and early 2022 to demonstrate the benefits of containerized SD-WAN utilizing Turnium’s SD-WAN Cloud Native Network Function (CNF), which is certified on Red Hat OpenShift and runs on Intel-powered servers. To implement, maintain, and test this software stack, TietoEvry offered a third-party testing environment, traffic simulators, and technical employees.

The purpose of the test was to compare the performance of dedicated, fixed-function deployments with software-defined, containerized deployments.

The research compared the performance of two types of edge deployments:

  • A cloud-native containerized deployment of Turnium’s SD-WAN as a containerized application deployed, controlled, and managed by OpenShift on an x86-based Universal CPE (uCPE).
  • Turnium's SD-WAN software image deployed directly on top of the uCPE operating system in a bare-metal deployment.

These edge nodes, also known as uCPEs, communicated with core nodes running Turnium Aggregator software, deployed in two configurations:

  • An aggregator on a public overlay network connecting to the uCPEs, simulating a public or private cloud remote from the edge nodes.
  • A simulated local aggregator on the same private network as the edge nodes, or uCPE.

Test results and concluding remarks

The performance testing revealed that the containerized SD-WAN met the needed performance while also providing improved scalability, flexibility, simplicity of deployment, and automation for large-scale edge computing solutions.

It was simple to add new nodes to the network automatically throughout the test. Automated network programming and features, such as wireline and wireless bonding and failover, made it possible to swiftly set up a resilient network. Deploying OpenShift demonstrated that controlling numerous edge devices at scale was a trivial process, and separating the software from the hardware using commercial off-the-shelf (COTS) hardware made it easy to size the device for core and edge nodes.

In terms of performance, tests revealed that OpenShift had little impact on system load, adding about 2% more CPU demand compared to running the SD-WAN implementation on bare metal. This low marginal CPU cost lets companies profit from edge compute and public or private 5G deployments by building and running more containerized applications on the uCPE.

By combining computing at the edge with technologies like Turnium’s containerized SD-WAN platform and Red Hat OpenShift, it’s easy to quickly install workloads or apps to take advantage of the low cost and high processing power of x86 COTS equipment.


Red Hat Satellite 6.10.5 is now Released

Posted on May 10, 2022 by Marbenz Antonio


Red Hat has announced that Red Hat Satellite 6.10.5 became generally available on March 29, 2022.

Red Hat Satellite is part of the Red Hat Smart Management subscription, which makes patching, provisioning, and subscription management of Red Hat Enterprise Linux infrastructure easier for organizations.

The errata for this release are:

  • https://access.redhat.com/errata/RHSA-2022:1708
  • https://access.redhat.com/errata/RHBA-2022:1706

Customers who have already updated to Satellite 6.10 should refer to the errata for further information. Customers running versions of Satellite prior to 6.10 should see the Red Hat Satellite Upgrade and Update Guide. If you're upgrading from Satellite 6.x to Satellite 6.10, consider using the Satellite Upgrade Helper.

Customers who have received hotfixes should check the list of resolved issues before updating to ensure their patch is included. If you have applied a hotfix that does not correspond to one of the defects listed in the errata above, please contact Red Hat Support to confirm that updating to Satellite 6.10.5 is safe.
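
For z-stream updates like this one, a minimal sketch using the satellite-maintain tool that ships with Satellite 6.10 might look like the following, run on the Satellite server itself (the exact target-version flag is illustrative of the 6.10.z stream):

    # Check that the system is ready to update
    satellite-maintain upgrade check --target-version 6.10.z
    # Run the update itself
    satellite-maintain upgrade run --target-version 6.10.z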


Extending Compliance Automation for process improvement with Compliance as Code

Posted on May 5, 2022 by Marbenz Antonio


Supply chain interruptions, intellectual property theft, and the escalating cost of data breaches are just a few of the reasons for a sharp increase in worldwide cybersecurity compliance.

Regulated sectors face stricter requirements, and some firms increasingly use third-party audits to ensure compliance with cybersecurity guidelines rather than relying on internal personnel. Non-regulated sectors can use the same standards to lower their security risk. Compliance automation is becoming increasingly critical as security professionals' workloads grow.

Why automate compliance in the first place?

Data breaches are costly. According to several studies, the typical cost of a data breach runs into the millions, and security professionals are already overworked. This is a solid argument for adopting automation to support compliance efforts.

Automation is the most practical way to improve your compliance activities given understaffing and tight labor markets. Compliance automation is an important part of managing work and decreasing risk, and Compliance as Code, an open-source initiative, provides tools to assist with this. Its security automation content is offered in SCAP, Bash, Ansible, and other formats to help verify required system configurations and remediate as necessary (a short sketch follows).
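
As an illustrative sketch, on a RHEL 8 host the scap-security-guide package (which is built from this project's content) can scan against a profile and generate an Ansible remediation playbook. The file names below are assumptions:

    # Install the SCAP content derived from Compliance as Code
    dnf install scap-security-guide
    # Evaluate the host against the CIS profile, saving the results
    oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_cis \
        --results results.xml /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
    # Turn the findings into an Ansible remediation playbook
    oscap xccdf generate fix --fix-type ansible --result-id "" \
        --output remediation.yml results.xml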

About Compliance as Code

The Compliance as Code project on GitHub was born out of a partnership between government agencies and business suppliers to make Security Content Automation Protocol (SCAP) content more available to users. Since its beginning in 2011, the project has grown to integrate commercial security profiles such as PCI-DSS and CIS, as well as current automation technology.

Today, the Compliance as Code initiative provides commercial suppliers with general-purpose security content and build tooling that they can develop and cooperate on swiftly (building the content is sketched below). We've leveraged these capabilities to deliver value to customers through automated compliance solutions. Compliance reporting can be difficult because of the nature of the reports and the process: ensuring correct results in a spreadsheet takes time and effort and frequently duplicates work. Automated report production can boost productivity and put repeatable findings in the hands of consumers and contributors in less time.
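
As a rough sketch of what "building the content" means, assuming the upstream ComplianceAsCode/content repository and its usual build dependencies (such as cmake and python3):

    # Fetch the project and build the SCAP/Ansible content for RHEL 8
    git clone https://github.com/ComplianceAsCode/content.git
    cd content
    ./build_product rhel8
    # Built data streams and Bash/Ansible remediations land under build/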

A new approach to compliance reporting

Organizations, particularly those in regulated sectors, are frequently required to obtain an Authority to Operate (ATO) before they can install and use software in their environments. A Security Requirements Guide (SRG), a collection of technical controls like those in the National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53, is used as part of this process.

This assessment determines if the software meets, does not meet, or can be adjusted to meet each control, as well as whether the control is applicable to the program in question. Other text-based information may be requested depending on the determined status.

To describe how to validate status, the evaluators may need to provide manual instructions or code. They may also need to supply the code required to configure the program to satisfy a given control. The end result of this exercise is a Security Technical Implementation Guide (STIG): a configuration standard containing the cybersecurity requirements for a specific product.

When spreadsheets are involved, the already difficult task of developing STIGs becomes even harder. The US Defense Information Systems Agency (DISA) publishes spreadsheets containing the security requirements for specific software, along with all the fields that may or may not need to be filled out depending on the status of each control, which can number in the hundreds. Specific issues a company can face when working on such a spreadsheet include:

  • Maintaining a record of who is doing, or has done, what
  • Tracking which fields must be filled out based on each control's status
  • Assuring proper content layout
  • Assuring quality

By automating the STIG creation and verification process, Red Hat is simplifying and streamlining Security Requirements Guide (SRG) processes to send Security Technical Implementation Guides (STIGs) to clients quicker and more efficiently.

The Compliance as Code codebase has been extended to generate STIG material based on previously validated tests. With automatic comma-separated values (CSV) file creation, the STIG content now inherits the test process already in place for Compliance as Code content, eliminating manual transcription mistakes.

Red Hat has begun by simplifying SRG processing, but it has no plans to stop there. Many of the same issues arise elsewhere, and we plan to use frameworks that apply to clients around the world and across sectors to deliver holistic solutions. Compliance as Code is a place where people can collaborate and improve current solutions to better serve their customers and the community.


Red Hat Identity Management Installation Automation

Posted on April 25, 2022 by Marbenz Antonio


All system administrators should be slackers. Not in the sense of not doing their work, but in the sense of doing it as efficiently as possible. Why do things by hand when you can automate them? The more tedious the work, the more motivation there is to automate it.

When it comes to identity management, it makes sense to automate the process. Red Hat Identity Management (IdM) is simple to set up, but as your environment grows, so does the number of systems to manage. In a conventional data center you'd likely have an intranet and a DMZ, and your servers would be divided into development and production. The people who need access to those servers will most likely be organized into groups as well, with database, web, and application administrators among them. Not to mention your system administrators, who require complete access.

Why should you go with automation instead of a manual installation?

Maintaining administrative files like sudoers and access.conf can take a long time, depending on the size of your environment. You must make adjustments on all of the relevant servers when adding or removing a user, which takes time and cuts into productivity. If a person's access remains valid after they leave, it also creates a security hole. IdM can take care of this for you, and automating the installation and setup saves time when getting your environments up and running.

Setting up Automation

I'll use a basic IdM environment in this example: a single IdM domain with two RHEL 8.5 IdM servers. The first is the primary, which hosts the certificate authority (CA); the second is a replica, which also serves as a backup CA.

To make IdM management easier, the developers established an upstream project called ansible-freeipa, and it has been incorporated into the RHEL repositories. "But Ansible is a subscription," you might be thinking. However, Red Hat Enterprise Linux (RHEL) includes the core Ansible components, allowing you to run playbooks created or generated by Red Hat products such as RHEL System Roles, OpenSCAP, or Insights remediation playbooks, as well as ansible-freeipa playbooks.

The environment

We'll need three virtual machines for this basic setup: one for the primary IdM server, one for the replica, and a third that runs Ansible and will also be a client. All three servers have an identical configuration of two vCPUs, 4GB of RAM, and 30GB of storage.

We'll install Ansible on the IdM client because it only has to be installed on one system. On RHEL 8.5 and earlier, we need to add the ansible-2.9-for-rhel-8-x86_64-rpms repository; it comes with the RHEL subscription and is used to automate RHEL applications. Ansible Engine has been replaced by Ansible Core in RHEL 8.6 and later, so there you won't need to enable a separate repository: it's included in AppStream.

Your repository list should look like this:

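A sketch of getting there and what the result might look like (output abbreviated; the repo IDs are the ones named above):

    # RHEL 8.5 and earlier only: enable the Ansible Engine repo
    subscription-manager repos --enable=ansible-2.9-for-rhel-8-x86_64-rpms
    # Confirm the enabled repositories
    subscription-manager repos --list-enabled

The enabled list should then include ansible-2.9-for-rhel-8-x86_64-rpms alongside the usual rhel-8-for-x86_64-baseos-rpms and rhel-8-for-x86_64-appstream-rpms.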

Ansible, ansible-freeipa, and the RHEL system roles must all be installed (see the sketch below). The ansible-freeipa package can be found in the rhel-8-for-x86_64-appstream-rpms repository and contains the roles and modules required to automate IdM installation and configuration. The RHEL system roles will be used to update the nameserver entries to point to the IdM server.
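
Assuming the package names from the RHEL 8 repositories, the installation is a one-liner:

    # Ansible itself, the ansible-freeipa roles/modules, and the RHEL system roles
    dnf install ansible ansible-freeipa rhel-system-roles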

Automating Red Hat IdM installation

Once the packages are installed, we can begin working on the inventory and playbook files.

Inventory file

In Ansible's default installation, the inventory file is named hosts and is located in /etc/ansible. This file will hold all of our host information and variables. Note that I'll be using the YAML format rather than the INI format you may be used to.

To establish our IdM environment, you'll need to edit the hosts file and add three groups: ipaserver, ipareplicas, and ipaclients.

The ipaserver group holds your primary IdM server; the CA revocation service is located there. The ipareplicas group holds all of the IdM replicas you'll be installing. Last but not least, the ipaclients group holds the hosts that will be enrolled with the ipaclient package. A skeleton of this layout is sketched below.
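
A minimal sketch of /etc/ansible/hosts in YAML format (all hostnames except the Ansible host, rh8.blog.example.com, are illustrative):

    all:
      children:
        ipaserver:
          hosts:
            server.idm.blog.example.com:
        ipareplicas:
          hosts:
            replica.idm.blog.example.com:
        ipaclients:
          hosts:
            rh8.blog.example.com: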

Red Hat IdM installation

The variables come in several types: Base, Server, Client, DNS, and Special.

The base variables are those necessary for a basic IdM installation: the passwords, plus the realm, domain, and hostnames.

Server variables cover optional components to set up, such as the KRA and DNS, as well as settings required for the IdM environment, such as the initial user and group ID numbers or whether to disable the Web UI redirect.

Client variables apply to clients and include things like whether or not to configure ssh, sudo, or NTP on the client.

DNS variables, as the name implies, are DNS options: allowing zone overlap, creating reverse zones, or configuring forwarders.

Finally, there are the special variables. These include settings such as the firewall zone to use, plus a few more that don't fit anywhere else.

There are also Certificate System, AD Trust, and SSL variables, but we won’t be utilizing any of them because we’re performing a simple setup.

We'll also add a few variables of our own, idm_short, root_domain, and nameserver, to make future additions or modifications to the playbook easier. Our FQDN will be created by joining idm_short and root_domain, and the nameserver variable will be used to update our DNS entries to point to the IP of our primary IdM server.

Each host will have its own set of variables, but those used several times will become global variables in the inventory's vars section.

The ipaadmin_password and ipadm_password variables must be added. These let us log in to the Web UI (ipaadmin_password) and to LDAP as Directory Manager (ipadm_password). For better protection, these passwords should be distinct from one another.

As a result, the global variable section should look as follows:

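A hedged sketch of that vars section (the values are illustrative; in practice you would protect the passwords with Ansible Vault):

    all:
      vars:
        ipaadmin_password: Secret123      # Web UI / IPA admin
        ipadm_password: DMSecret456       # LDAP Directory Manager
        idm_short: idm
        root_domain: blog.example.com
        nameserver: 192.168.122.10        # IP of the primary IdM server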

Now that we have our global variables, we'll move on to the ipaserver variables, which go on the host entry itself. Because we're adding variables to the host, we need to remove the null item at the end of the host entry; that item indicates a host with no variables.

We'll use the idm_short and root_domain variables we created before to build the ipaserver_domain variable, which is then used for the ipaserver_realm variable.

Next, we will set up the DNS by adding ipaserver_setup_dns, ipaserver_reverse_zones, ipaserver_allow_zone_overlap, and ipaserver_no_forwarders to our variable list. These instruct Ansible to install DNS, establish a reverse zone, skip checking whether a DNS record already exists for our name, and skip configuring forwarders.

Finally, we'll tell the IdM server which firewalld zone its services should be added to. You don't need this if you only use the public zone, but we've had customers who didn't use public and couldn't deploy replicas or clients as a result.

As a result, our server variable list should look like this:

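A sketch of the resulting host entry (the zone names are illustrative, and I'm assuming the ipaserver_firewalld_zone variable from ansible-freeipa for the firewall setting):

    ipaserver:
      hosts:
        server.idm.blog.example.com:
          ipaserver_domain: "{{ idm_short }}.{{ root_domain }}"
          ipaserver_realm: "{{ ipaserver_domain | upper }}"
          ipaserver_setup_dns: true
          ipaserver_reverse_zones:
            - 122.168.192.in-addr.arpa.
          ipaserver_allow_zone_overlap: true
          ipaserver_no_forwarders: true
          ipaserver_firewalld_zone: internal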

For replicas, we use a number of variables similar to the ones used for the server. We don't need to generate a new admin password, and we don't need the domain or realm variables, because they're only configured on the server.

We do need to repeat the DNS and firewall variables, and add the CA variable since the replica will host a backup CA. Instead of using ipaserver as our prefix, we'll use ipareplica, and we'll include ipareplica_setup_ca to create the backup CA.

We need to update the nameservers to point to the primary's DNS so that the replicas can resolve the DNS entries of the new IdM environment. We'll use the network system role to do this, with the nameserver global variable we defined earlier supplying the value.

Because we're using a static IP, we'll need to configure dhcp4, dns, address, and gateway4. We'll reuse the IP address and gateway that Ansible gathers as facts, set dhcp4 to no, and use the nameserver variable defined previously (the ipaserver IP) for dns.

As a result, the following should be added to the vars section of our inventory file:

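A sketch of the ipareplicas section, assuming the network_connections schema of the rhel-system-roles network role and an ipareplica_firewalld_zone variable mirroring the server's (the interface name and prefix length are illustrative):

    ipareplicas:
      hosts:
        replica.idm.blog.example.com:
      vars:
        ipareplica_setup_ca: true
        ipareplica_setup_dns: true
        ipareplica_no_forwarders: true
        ipareplica_firewalld_zone: internal
        network_connections:
          - name: eth0
            type: ethernet
            ip:
              dhcp4: false
              address:
                - "{{ ansible_default_ipv4.address }}/24"   # reuse gathered facts
              gateway4: "{{ ansible_default_ipv4.gateway }}"
              dns:
                - "{{ nameserver }}"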

Finally, we get to the clients, and this part is straightforward. We only need to enable creating a home directory when a user logs in for the first time, and change the nameserver as we did with the ipareplicas group, as sketched below.

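A matching sketch for the clients (ipaclient_mkhomedir is the ansible-freeipa variable for creating home directories at first login):

    ipaclients:
      hosts:
        rh8.blog.example.com:
      vars:
        ipaclient_mkhomedir: true
        network_connections:
          - name: eth0
            type: ethernet
            ip:
              dhcp4: false
              address:
                - "{{ ansible_default_ipv4.address }}/24"
              gateway4: "{{ ansible_default_ipv4.gateway }}"
              dns:
                - "{{ nameserver }}"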

This is a rather modest inventory file, only 29 lines for our setup. You can add as many replicas and clients as you like; however, with the CA option enabled I wouldn't add too many replicas, because the replication traffic can cause network congestion. We recommend having no more than three or four CAs, including the primary server.

Once it's finished, the inventory file simply combines the global vars section with the ipaserver, ipareplicas, and ipaclients sections sketched above.


Playbook

The playbook itself is straightforward. Alongside it, I'll create one role that enables PTR sync for both the forward and reverse zones; PTR sync is disabled by default whether the install is done manually or through automation. With it enabled, PTR records are created automatically, so you won't have to add them by hand afterward.

First, I use the mkdir command to create the directory structure I need for this role.

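Something like this (the role name matches the idm-dnsconfig role used below; the path is relative to the playbook):

    mkdir -p roles/idm-dnsconfig/tasks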

Then I create the main.yml file and use ipadnszone to ensure both allow_sync_ptr and dynamic_update are enabled on both the forward and reverse zones.

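A sketch of roles/idm-dnsconfig/tasks/main.yml, using the ipadnszone module shipped with ansible-freeipa (the zone names are illustrative, matching the inventory above):

    ---
    # Allow PTR sync and dynamic updates on the forward zone
    - name: Enable PTR sync on the forward zone
      ipadnszone:
        ipaadmin_password: "{{ ipaadmin_password }}"
        name: idm.blog.example.com
        allow_sync_ptr: true
        dynamic_update: true

    # Same settings on the reverse zone
    - name: Enable PTR sync on the reverse zone
      ipadnszone:
        ipaadmin_password: "{{ ipaadmin_password }}"
        name: 122.168.192.in-addr.arpa.
        allow_sync_ptr: true
        dynamic_update: true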

All that's left is to write the actual playbook. It uses three ansible-freeipa playbooks (install-server.yml, install-replica.yml, and install-client.yml) and one RHEL system role (network). The DNS servers on the ipareplicas and ipaclients are updated first, so that they can resolve the ipaserver and its SRV records.

The idm-dnsconfig role runs between the server and replica installs, so PTR records can be created for the replica(s) and client(s) as they are added.

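A hedged sketch of the top-level playbook, assuming the sample install-server.yml, install-replica.yml, and install-client.yml playbooks shipped with ansible-freeipa have been copied next to it:

    ---
    # Point replicas and clients at the primary IdM DNS first
    # (become is required here when not connecting as root; see below)
    - hosts: ipareplicas,ipaclients
      become: true
      roles:
        - rhel-system-roles.network

    # Install the primary IdM server
    - import_playbook: install-server.yml

    # Enable PTR sync so replica and client records get reverse entries
    - hosts: ipaserver
      roles:
        - idm-dnsconfig

    # Install the replicas, then the clients
    - import_playbook: install-replica.yml
    - import_playbook: install-client.yml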

We still have a few things to finish before we can execute this playbook. We must ensure that the Ansible host, rh8.blog.example.com, can resolve the IP addresses of the IdM servers and clients. We'll achieve this by adding them to the Ansible host's /etc/hosts file.

If you're running this as root, make sure to set ask_pass = True in ansible.cfg. If you are using a different account, you will also need to set become=True, become_method=sudo, become_user=root, and become_ask_pass=True further down the file.

After you’ve completed all of this, you’re ready to run the playbook.

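The run itself might look like this (the playbook file name is illustrative; with ask_pass set, Ansible prompts for the SSH password):

    ansible-playbook install-idm.yml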

Because we're executing this as root, I'm not using the privilege-escalation settings in ansible.cfg. If your security policy prohibits root login over SSH, you'll need to enable them and use a user with sudo privileges. Because it requires root privileges, you'll also need to add become: true to the network play.


To fight a pandemic, the power of open source software and collaborative public health information – EIOS

Posted on April 21, 2022 by Marbenz Antonio

In late 2019, when news stories about a mysterious respiratory ailment began to multiply, it would have been practically impossible to anticipate how the next year would play out. Yet, headed by the World Health Organization (WHO), worldwide cooperation between multiple public health stakeholders was already in place, accompanied by a technical solution for globally collaborative detection, verification, and risk assessment of threats to public health.

The Epidemic Intelligence from Open Sources (EIOS) initiative is a global collaboration of public health stakeholders to develop a unified, all-hazards, "One Health" approach to early detection of public health threats from publicly available sources, along with their verification, risk assessment, ongoing monitoring, situation analysis, and communication between relevant stakeholders, providing actionable intelligence for decision-making.

The EIOS community of practice is supported by the EIOS system, built on a long-standing partnership between the WHO and the European Commission's (EC) Joint Research Centre (JRC). The EIOS system integrates other systems and players, encouraging collaborative development of new public health intelligence capabilities and features.

On December 31, 2019, EIOS noticed a surge in pneumonia-related reports, and by the end of 2020 the system had seen a significant increase in data volume, with over 26 million tagged items relevant to the pandemic. These 26 million articles account for over 85% of all articles on EIOS, posing an evident scalability challenge owing to the significant growth in traffic throughout the year.

Red Hat contacted the WHO as part of the company's Social Innovation Program, which assists nonprofit organizations with open source technology projects that help tackle the world's most important problems. The two organizations chose to work together on EIOS, and the resulting engagement provided the EIOS team with various recommendations on how to grow the platform further.

This was accomplished by helping troubleshoot known scalability difficulties across many teams through joint sessions, finding bottlenecks collectively, and integrating all viewpoints into the final recommendations. One of the collaboration's key objectives was demonstrating to EIOS how to establish a consistent and dependable system using an infrastructure-as-code approach, including the use of Ansible to automate deployments.

Prometheus and Grafana, both used as monitoring systems in EIOS environments, were also deployed using Ansible.

Red Hat's Social Innovation Program provided timely and essential help during a hard period for public health practice worldwide. With the fast international spread of COVID-19, a global flood of data put public health specialists, and the EIOS system they rely on, under tremendous strain.

During a critical period for the system's operation, Red Hat professionals proved invaluable, providing fit-for-purpose open-source solutions and ideas for enhancing the EIOS system's performance and reliability.


What you need to know about the deprecation of OpenSSH SCP in RHEL 9

Posted on April 21, 2022 by Marbenz Antonio

The deprecation of the SCP protocol in Red Hat Enterprise Linux (RHEL) 9 is one of the most critical security changes for OpenSSH.

The following are the changes we've made:

  • For file transfers, the scp command-line tool now defaults to using the SFTP protocol.
  • The newly introduced -O option can be used to restore SCP protocol use (see the sketch after this list).
  • The SCP protocol can be completely disabled on a system: if the file /etc/ssh/disable_scp exists, any attempt to use the SCP protocol will fail.
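
In practice that looks roughly like this (the host and file names are illustrative):

    # RHEL 9 default: scp transfers the file over SFTP
    scp report.txt user@server.example.com:/tmp/
    # Explicitly fall back to the legacy SCP protocol
    scp -O report.txt user@server.example.com:/tmp/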

We're making this move because the SCP protocol is decades old and has several security vulnerabilities and problems for which there are no simple fixes. New flaws are disclosed regularly (the most recent as of this writing is CVE-2020-15778, but we can't guarantee it will be the last), and fixing them all effectively is challenging because the protocol fundamentally trusts the authenticated remote side.

As a result, some RHEL customers prefer to deactivate the SCP protocol entirely on their systems. At the same time, we have SFTP, a well-defined protocol that covers the majority of SCP's use cases, so switching to the superior protocol makes sense.

Fix creation and adoption

Jakub Jelen, a Red Hatter who maintained the OpenSSH package for numerous years and is extremely familiar with the toolkit's internals, wrote the first patch implementing the switch. Jelen's fix was accepted upstream with minor changes in 2021. It has since been updated with various compatibility adjustments to better match SCP behavior and to correctly handle the corner cases observed so far.

Although upstream has put off switching to the SFTP protocol by default, we chose to make the move in RHEL 9. A major release is the best time to implement such changes, because people who move to a new major version are more likely to expect such incompatibilities.

Differences between SCP and SFTP protocols

There are notable differences between the SCP and SFTP protocols that we are aware of. For example, when transferring files, scp follows symbolic links, but SFTP does not; this has been rectified upstream, and our package has been updated to reflect the change. Glob pattern expansion differs as well, and those incompatibilities will persist for the time being.

The expansion of ~-based paths is another distinction between the protocols. To handle this expansion, OpenSSH 8.7 and later provide a dedicated SFTP extension. Unfortunately, earlier versions of RHEL do not support this extension, so transferring folders from a newer version to an older one will fail when ~-based paths are used. The suggested workaround in such cases is to provide absolute paths.

What should you do if this update has an impact on your system?

If this change impacts your system, you have a few alternatives. Upgrade the legacy system to a newer version of RHEL, if possible. If you can't do that, you can keep using the SCP protocol by specifying the -O option explicitly.

However, if you use this option in your scripts, keep the following in mind:

  1. The SCP protocol is less secure than the SFTP protocol and poses security issues (see CVE-2020-15778 as an example).
  2. It is slated for removal in a forthcoming major release of Red Hat Enterprise Linux.
  3. It won't work if the SCP protocol is disabled on the destination machine.

It is also practical to use rsync instead of scp. Rsync uses its own protocol for file transfer, with ssh providing the secure transport, as in the example below.
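
A quick illustration (the paths are made up):

    # Copy a directory tree over ssh; -a preserves permissions and times, -v is verbose
    rsync -av -e ssh ./reports/ user@server.example.com:/srv/reports/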


Red Hat Certified Professionals can get Digital Certifications

Posted on April 21, 2022 by Marbenz Antonio

Red Hat certifications cover our complete technology portfolio, giving individuals and enterprises alike peace of mind. Red Hat understands the skills required for success, whether that means enterprise architecture certifications, fundamental Linux system administration abilities, developer expertise in specific frameworks, or new technologies like containers and cloud.

We want to help you celebrate your achievement with your professional network, friends, and family. Earning a Red Hat Certification is an industry-recognized milestone. As a result, we’re pleased to provide Red Hat digital credentials through Credly, our third-party issuing partner.

What’s a Red Hat Digital Credential?

To accelerate the adoption of Red Hat technologies and support customer success, Red Hat Digital Credentials recognize and reward learning successes, community contributions, and ecosystem participation. A digital credential is gained when certain requirements are met, and Red Hat’s digital credentials can reflect a variety of accomplishments. A digital badge from Red Hat is a sharable, verifiable, portable, data-rich version of that record.

When someone looks at your Red Hat Certification digital badge, they can learn more about your abilities and verify the badge's authenticity, making it simple for them to confirm your knowledge and training. You may also post your digital badge on social media or include it in your CV, making it instantly recognizable to readers as a trustworthy Red Hat endorsement of your accomplishment. Sharing Red Hat-issued digital badges on social media, the web, and in other digital settings is the proper way to display achievements supported by the Red Hat Digital Credentialing Program.

How can you get your digital Red Hat Certification badge?

Simply complete the steps below to acquire your new Red Hat Certification certificate and digital badge.

Here is what you need to know

After completing a Red Hat Certification exam and updating your Red Hat account profile page to opt in to a digital credential, you will get an email notification from Credly with information on how to accept and begin sharing your Red Hat digital badge. It is entirely up to you whether or not you accept the Red Hat Certification digital badge.

Here is what you need to do

  1. Create a redhat.com account if you haven’t already.
  2. If you’re a Red Hat Certified Professional, you’ll need to link your certification ID to your redhat.com account login so that your certification can be verified or discovered in a search. The methods to link a certification ID to a Red Hat account are outlined here.
  3. While completing step 2, check the box indicating that you want Red Hat to send you a Red Hat Certification digital badge and downloadable certificate. This is required and means you give Red Hat permission to share your name, email, and Red Hat-issued certification ID with Credly. Once you've done this, Credly will send you an email with instructions on how to accept and share your new Red Hat Certification digital badge.

Questions and Support

If your Red Hat Certification is up to date, Red Hat will issue a digital badge through Credly that will be available and valid until the certification expires. If your Red Hat Certification is about to expire, the digital badge will be updated to reflect this.

You can opt out of receiving a Red Hat digital badge at any time by returning to your redhat.com account login page.


Emotional Intelligence is Needed for Digital Transformation

Posted on April 20, 2022 by Marbenz Antonio

A successful digital transformation requires emotionally aware leadership. Consider the following professional suggestions to encourage and inspire your team.

It’s not only about technology when it comes to digital transformation; it’s also about people. CIOs are responsible for transforming a legacy-ridden world into a digital one. Meanwhile, IT personnel tasked with implementing this massive change may experience sadness and burnout.

The drive for speed can strain emotions, push intelligence to its limits, and test patience. Pressure may test a CIO's capacity to make informed judgments and push their emotional intelligence to its limits (also known as EQ, a term coined by Peter Salovey and John Mayer and popularized by Dan Goleman in his 1996 book, Emotional Intelligence).

We must be able to detect, analyze, and manage our own emotions as great leaders in order to effectively affect the emotions of others. Emotional intelligence improves our capacity to lead and helps us manage ourselves and others effectively. It also assists us in giving and receiving information, meeting deadlines, dealing with difficult relationships, working in resource-constrained contexts, managing change, and dealing with setbacks, failure, and, yes, success.

Digital Transformation: 5 Tips to Help your Team Thrive Amidst Change

With that in mind, consider the following essential considerations while developing and implementing digital transformation:

  • It isn’t merely a technological issue. The requirements of the business are critical. The essence of EQ is understanding stakeholders’ needs, mindsets, and willingness to change.
  • Employee perceptions and expectations are always shifting, particularly in the post-pandemic workplace. As an IT leader, how well do you know today’s workforce and user groups, their ambitions, and what they want from apps and services?
  • Learn about the processes of communication and change management. What are the barriers to adoption and change? How can you convey the advantages of change and make it simpler for people to accept it? Understanding these challenges necessitates a high level of EQ as well as maturity.
  • Be able to deal with criticism, pushback, disappointments, blockages, and other situations that demand a high level of emotional intelligence (EQ) and maturity.
  • Always listen to your team, and ask the right questions before taking action. Leaders should be listening, learning, and watching at all times. In a company, data is knowledge, and people hold that knowledge: team members, partners, and customers are all sources of solutions.

Great CIOs listen in order to motivate their people to set and achieve goals. Emotional intelligence is required to help the team achieve corporate goals while also helping individuals reach personal milestones.

The New CEO: Chief Empathy Officer

Overcoming hurdles is a necessary part of digital transformation, and finding answers involves conversation to reduce friction and complexity. Make sure each member of your team has what they need to succeed, and that the team as a whole is capable of driving transformation.

Asking open-ended questions is part of EQ. CIOs who make a difference pose the following questions to their teams:

  • What is it about IT and digital transformation that keeps you awake at night?
  • What can we do to make things easier for you?
  • How can we assist you in hastening your digital transformation?

It's also important to know the negative side of EQ: harmful language that tears people down and prevents progress. Remove "low EQ" phrases from your vocabulary and consider the following suggestions:

  • Replace "Because I said so" with "How can we get you the assistance you need to make this a success?"
  • Make an effort to be inclusive. Instead of "That is a bad or dumb idea," try "Your proposal is a good starting point; let's build on it."
  • Avoid finger-pointing by cultivating a fail-fast culture. Don't ask, "Who authorized that idea/decision?" Rather, ask, "How can we make this decision work?"

People will amaze you if you start with trust and respect rather than skepticism.

Remember that technology must function for a digital transformation to be effective. However, without people, nothing functions.


Evolution, Trends, and Insights into 5G Edge and Security Deployment

Posted on April 11, 2022 by Marbenz Antonio

As operators and the broader mobile ecosystem continue to invest in 5G technology, the Heavy Reading 2022 5G Network Strategies Operator Survey gives insight into how 5G networks may evolve. We’ll start by going through some of the results for 5G and edge computing, then go on to a 5G security viewpoint.

Red Hat sponsored parts of the survey, including the sections on service provider 5G edge computing plans and approaches to 5G security.

Drivers for 5G edge deployments

The healthcare, financial services, and industrial sectors are driving current edge installations. According to Heavy Reading, the media and entertainment sector will be the next largest growth segment, with 66% of respondents saying they will deploy 5G edge services to these verticals in the next two years.

As the collated data shows, service providers' priority is to reduce costs and improve performance. Reduced bandwidth utilization and cost was cited by 63% of those polled as the most important financial driver, followed by stronger support for vertical sector applications (46%) and differentiated offerings versus the competition (43%).

Improved resilience and application performance were two significant requirements for edge deployments for smaller operators (annual sales less than $5 billion). Both of these criteria, according to respondents, have the impact of cutting costs and boosting customer satisfaction by making service level agreements (SLAs) simpler to meet.

Larger operators place a greater emphasis on specialized services and apps that can provide new income streams. The necessity to compete not just with other telecom service providers but also with hyperscalers may explain the higher significance expressed by larger operators (68% vs. 28%). Given that some service providers are looking to work with hyperscalers to tackle edge deployment issues, this is an intriguing finding.

Edge deployment options

Even though a range of edge deployment choices is available, the hybrid public/private telco cloud infrastructure is the most popular, with 33% of respondents choosing it. This is hardly unexpected, since it provides service providers with a healthy balance of ownership, control, and reach.

As Heavy Reading points out, service providers' cultural hesitations about cooperating with hyperscalers are disappearing, thanks to the rapidity with which hyperscalers can deliver edge installations.

Some service providers have decided to deploy at the network’s edge or on-premises, and this appears to be focused on private 5G potential. Private 5G for mining is an important market for US tier 1 service providers, and multi-access edge computing (MEC) is viewed as a crucial enabler for private 5G.

The use of container-based technology at the edge

Linux containers enable software to be packaged with all of the files it needs to execute while sharing access to the operating system and other infrastructure resources. This lets service providers move a containerized component across environments (development, test, and production), and between clouds, while maintaining full functionality (see the toy example below). Containers can improve innovation and differentiation by increasing efficiency, resiliency, and agility.
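
As a toy sketch of that portability (the image name and tag are invented), the same image runs unchanged on a lab server, a uCPE, or a cloud node:

    # Build an image that carries the workload and everything it needs
    podman build -t sdwan-demo:1.0 .
    # Run that same image on any host with a container runtime
    podman run --rm -d sdwan-demo:1.0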

In the context of edge deployments, however, many service providers find it difficult to adopt container-based technologies. The survey underlines the challenge of the shift, with over half of respondents saying that fewer than 25% of their edge workloads are containerized today. The trend is expected to gain traction in the coming years: more than half of respondents anticipate that 51% or more of their workloads will be containerized by 2025.

Other complexities with edge deployments

The greatest challenge to existing edge deployments is the cost and complexity of infrastructure (55% of respondents). The integration and compatibility of ecosystem components also ranks high (49%). Red Hat maintains strong engagement with partners focused on innovation for service provider networks to overcome these integration and compatibility problems.

Through our testbed facilities, we enable the creation, testing, and deployment of partner network functions (virtual network functions and cloud-native network functions), allowing for faster adoption and reduced risk. We test network functions regularly to verify that they operate properly with our products.

Red Hat has also created a number of partner blueprints and reference designs that enable service providers to install pre-integrated components from a variety of vendors. We deliver a uniform and consistent cloud-native platform through our comprehensive portfolio, together with the essential functional components, automation, and integration services from our partners for complete operational readiness.

5G security concerns and strategy

Because of the more distributed network design, more sophisticated devices, and a larger attack surface, 5G network security is even more important. The survey identifies a variety of infrastructure features that service providers value for security, such as the use of trusted hardware, identity, and access management. Trusted hardware is a vital component for device endpoints when it comes to safeguarding the 5G edge.

Container orchestration security and continuous image security scanning and vulnerability analysis, both of which echo the earlier findings about container-based technology, also score highly. The top two priorities in service providers' 5G edge security plans are trusted hardware and continuous image security scanning and vulnerability analysis; they're also regarded as critical features for protecting endpoints.

The importance of zero-trust deployment and provisioning is also mentioned. Zero trust comes out on top for consistent infrastructure provisioning across physical and virtual network functions (48%) and for encryption of data in motion (46%).

While the majority of service providers are confident in their 5G security approach, concerns about maturity and scalability exist outside of the United States. These concerns center on the internal resources and skill sets needed to properly implement a security plan that encompasses ever-changing threats, compliance needs, tools, and architectural changes.

Final thoughts and how Red Hat can assist

The edge extends opportunity, and service providers must move toward it to exploit new service and revenue opportunities, as well as network efficiencies. To avoid inflexibility, systems must be able to adapt to changing demand and hard-to-predict application use cases.

A stable basis for a range of network operations across any infrastructure may be provided by a single and consistent cloud-native platform that spans the network from core to edge. The Red Hat OpenShift Container Platform provides service providers with the deployment choices they need to extend their footprint to meet changing cost and environment demands.

Increased customizability, scalability, dependability, and portability are all advantages of a cloud-native approach to 5G network deployment. Red Hat OpenShift enables service providers to fully realize the benefits of cloud economics by accelerating the delivery of new 5G services and optimizing their operating model through simpler processes, lowering the total cost of ownership.

Thanks to Red Hat's vast partner ecosystem, service providers can incorporate their preferred 5G software features and hardware from many manufacturers to meet their needs.

