
OUR BLOG


Category: Linux

What’s new in the most recent Linux VDA version?

Posted on January 11, 2023 by Marbenz Antonio


Linux virtual desktops have been demonstrated to be an optimal solution for developers, users of graphic-intensive software, and others. For instance, when organizations want to outsource programmers abroad and provide them with secure, remote access to development environments from any device, using a Linux virtual desktop with Citrix technology can be an ideal solution.

In this blog post, Citrix discusses new features added in the 2212 Linux VDA release that greatly enhance the HDX user experience and simplify the way Linux virtual desktop environments are deployed and managed.

Support for RHEL 9.0 and Rocky Linux 9.0

The Citrix Linux VDA now supports Red Hat Enterprise Linux (RHEL) 9.0 and Rocky Linux 9.0, starting with the 2212 release. This allows customers to take advantage of the latest features of these distributions while also benefiting from their improved performance. Additionally, some customers can benefit from RHEL’s Extended Update Support (EUS), which is available as an add-on only for version 9.0 of Red Hat Enterprise Linux Workstation Standard and Premium subscriptions.

Session Recording for Linux: Now in Preview

Session Recording is a simple-to-deploy feature that provides screen-recording and event-capturing functionality to assist in maintaining security compliance. With the 2212 release, Citrix has incorporated Session Recording into the Linux VDA (in preview). Enabling it converts HDX-recorded sessions into Session Recording files. You can then download and manage these files on the Session Recording server and play them back through the Session Recording player.

It is important to note that this preview currently includes only session recording and playback functionality; session recording policies and events are not yet available.

[Screenshot: Linux Session Recording file playback]

Improved 3D Graphics Performance

In addition to the selective H.264 hardware encoding feature, Citrix is continually working to enhance performance with 3D graphics. The 2212 release includes improved performance for both vGPU and remote PC scenarios.

Citrix has improved data-transfer efficiency between the GPU and Linux system memory and decreased latency in 3D graphics rendering and hardware encoding. These advancements optimize hardware resource utilization, resulting in a significant improvement in frames per second (FPS). For instance, in lab testing, a window one quarter the size of a 4K display yielded around 25 FPS, and around 40 FPS for a 2K display. For further details, please refer to the H.264 hardware encoding product documentation.

USB Device Redirection Enhancements

Given the widespread use of USB devices, it’s important to let users connect the devices they need, with the best possible performance, to complete their work. Previous USB redirection had limitations, however, such as a lack of support for USB 3.0 and for certain devices. By adopting the USB/IP protocol, Citrix has made several improvements to USB device redirection:

  • Easier deployment: The USB/IP kernel module is typically included with Linux kernel versions 3.17 and newer and does not usually need to be manually built by administrators.
  • USB 3.0 support: Lab tests have demonstrated that redirecting USB 3.0 is up to 100 percent faster than redirecting USB 2.0.
  • Higher bulk transfer efficiency: Lab experiments have shown that bulk data transfer efficiency improved by an average of 34%, an improvement that is particularly notable in high-latency situations.
  • Support for more USB devices: The updated version of the software now includes support for several new devices: the TD-RDF5A Transcend USB device, the composite USB device, and the Yubico YubiKey OTP+FIDO+CCID.

New Features to Simplify Your Deployment

Extending Easy Install GUI to Include MCS Configuration

Previously, setting up the Linux VDA involved multiple manual steps, which could lead to errors and prolonged troubleshooting. Citrix now offers a graphical tool called Easy Install that assists administrators in evaluating the system, installing the necessary components, and configuring domain join and runtime variables. This tool has streamlined onboarding and enhanced efficiency for many customers.

With the 2212 release, the Easy Install GUI has been extended to configure Machine Creation Services (MCS) settings. Administrators can now use the GUI to set MCS variables, which is especially useful for Citrix Linux VDA deployments that rely on MCS rather than domain join. This extension makes non-domain-joined deployment much simpler.

New Database Options Now Available (Experimental)

Before, every Linux VDA deployment involved installing a PostgreSQL database. However, this was a complicated process and could cause conflicts with a developer’s existing PostgreSQL database service.

The 2212 release now includes an experimental feature that enables you to use SQLite in addition to the default PostgreSQL for your Linux VDA. You can switch between SQLite and PostgreSQL by modifying the ‘/etc/xdl/db.conf’ file after you’ve installed the Linux VDA package. Additionally, this release also supports the ability to customize the port number for the PostgreSQL database.
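As a rough sketch of that switch, the edit might look like the following, shown here on a scratch copy. Note that the 'DatabaseType' key name is an assumption made for illustration; consult Citrix's Linux VDA documentation for the actual db.conf format.

```shell
# Demo on a scratch copy; the real file is /etc/xdl/db.conf.
# 'DatabaseType' is a hypothetical stand-in for whatever key the
# Linux VDA actually uses; check the Citrix docs before editing.
conf=/tmp/demo_db.conf
printf 'DatabaseType=PostgreSQL\n' > "$conf"
# Switch the backend from the default PostgreSQL to SQLite
sed -i 's/^DatabaseType=.*/DatabaseType=SQLite/' "$conf"
cat "$conf"
```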

 


Here at CourseMonster, we know how hard it may be to find the right time and funds for training. We provide effective training programs that enable you to select the training option that best meets the demands of your company.

For more information, please get in touch with one of our course advisers today or contact us at training@coursemonster.com


Linux OpenSSH Security Hardening Guide

Posted on May 11, 2022 (updated July 26, 2022) by Marbenz Antonio


On Linux platforms, SSH is one of the most extensively used protocols for system management. It’s compatible with a variety of Unix, Linux, and macOS-based operating systems. It follows the client-server model, in which one computer runs the server component while another accesses it using a client tool.

How does SSH work?

The connection is established by the client (ssh) submitting a request to the server, which listens for incoming requests through its daemon (sshd). The server authenticates itself to the connecting client by presenting its public key.

This ensures that the client is connected to the correct SSH server. After that, the client can connect to the server. To connect to the server from a Windows client, you’ll need to use a tool such as PuTTY. Both the client and server tools may be installed on the same system, which means you can use the client tool to connect to other computers, or your system can act as a server that others connect to. Both config files are stored in the same directory, with slightly different names: the ssh client config file is called ‘ssh_config’, whereas the server’s config file is called ‘sshd_config’.


If you have both files on your machine, you should decide which one you need to configure first. In most situations, it is the server that requires security configuration because it is the gateway to the system.

We’ll start by verifying the status of our server’s SSH daemon, sshd, to check whether it’s running, and enable it to start automatically when the computer boots. The following command will check the status of sshd:

$ systemctl status ssh.service

Or use the below one:

$ systemctl status sshd.service


Configuring SSH using the Best Practices

It is now time to begin configuring the SSH server. Before we get our hands dirty, we should make a backup of the SSH config file with its default settings:

$ sudo cp /etc/ssh/sshd_config ~/sshd_config.bkp

After making the backup, we can be confident that if we make a mistake in the main file and break SSH, we can restore normality from the backup file.

1. Changing the default port

By default, the sshd daemon listens on port 22. It is advisable to change this value to something else in order to limit the reach of automated script attacks; this strategy is known as security by obscurity. To do so, open the config file with the command below and search for the line that says ‘#Port 22’:

$ sudo nano /etc/ssh/sshd_config

Uncomment the line ‘#Port 22’ and replace ‘22’ with a port number that is not already in use on your system. We changed the port to ‘222’ and restarted the service. To specify the new port, use the ssh command with the ‘-p’ option:

$ ssh user@system_ip -p 222
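Rather than passing '-p' on every connection, the client can pin the port in its own config file. Below is a minimal sketch; the host alias, address, and port are examples, and the demo writes to /tmp, whereas the real client config lives at ~/.ssh/config.

```shell
# Demo client config written to /tmp; the real file is ~/.ssh/config.
conf=/tmp/demo_ssh_client_config
cat > "$conf" <<'EOF'
Host myserver
    HostName 192.0.2.10
    Port 222
EOF
cat "$conf"
# With this entry in ~/.ssh/config, 'ssh myserver' uses port 222 automatically.
```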


2. Disabling login as the root user

Root is the most powerful user on a Linux system, with access to all of the system’s resources. If you do not require direct root access over SSH, you should disable root login on your server. To do so, open the same file as before:

$ sudo nano /etc/ssh/sshd_config

Set the option ‘PermitRootLogin’ to ‘no’. This will guarantee that the server is safe against random assaults aimed at the root account. The default option is ‘prohibit-password,’ which allows for public-key authentication but not password authentication.

3. Setting the protocol version

SSH1 is the earlier protocol version and is less secure than SSH2; the two also have distinct networking implementations and are incompatible with one another. To see which protocol version is active on your server, open the sshd_config file again and search for the line ‘Protocol’:

$ grep 'Protocol' /etc/ssh/sshd_config

If you get an empty output, OpenSSH is most likely using version 2, as it was in our instance. Another option is to use the Netcat command:

$ nc localhost 22

Sample output:

SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.4

SSH2 is operational on our machine, as seen by the output.

Try connecting to a remote server using an ssh client with the -Q (query) option to see which protocol version it is running:

$ ssh -Q protocol-version user@server_name

[Screenshot: an SSH version 2 connection from Kali Linux to an Ubuntu SSH server]

4. Password complexity

Weak passwords are always open to exploitation, and empty passwords even more so. As a result, the ‘PermitEmptyPasswords’ option in the sshd_config file should be set to ‘no’. Similarly, to lessen the probability of a brute-force attack, the number of login attempts with incorrect passwords should be limited. This can be accomplished by setting the ‘MaxAuthTries’ option to a low number, such as 3.

[Screenshot: SSH access refused after three incorrect passwords]

With ‘MaxAuthTries’ set to 3, we are refused SSH access after three incorrect passwords, as shown in the image above. Using public-key authentication for login is another important security measure: key-based authentication is far more resistant to brute-force attacks than passwords. Similarly, we may use the PAM authentication module to strengthen the SSH server even further.
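The four practices above can be collected into a single scripted pass. This is a sketch that operates on a scratch copy; for a real server, point it at /etc/ssh/sshd_config instead.

```shell
# Scratch copy standing in for /etc/ssh/sshd_config
conf=/tmp/demo_sshd_config
# Defaults as they commonly appear (commented out)
printf '%s\n' '#Port 22' '#PermitRootLogin prohibit-password' \
              '#PermitEmptyPasswords no' '#MaxAuthTries 6' > "$conf"
# Apply the hardening settings from sections 1, 2, and 4
sed -i -e 's/^#\{0,1\}Port .*/Port 222/' \
       -e 's/^#\{0,1\}PermitRootLogin .*/PermitRootLogin no/' \
       -e 's/^#\{0,1\}PermitEmptyPasswords .*/PermitEmptyPasswords no/' \
       -e 's/^#\{0,1\}MaxAuthTries .*/MaxAuthTries 3/' "$conf"
cat "$conf"
```

On a real server, always syntax-check the result with 'sudo sshd -t' before restarting sshd; a typo in sshd_config can lock you out of the machine.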

Conclusion

We’ve tried to cover the most critical aspects of protecting an SSH server in this article and condense them into four primary points. Although this article is not exhaustive, you can find more ways to enhance your SSH server. You might, for example, deploy a banner message to warn users about utilizing SSH to access your system. You can also disable password-based authentication and use key-based authentication instead. Another feature worth mentioning is the ability to limit the number of SSH users as well as their connection time.

 




A New DNS Spoofing Threat Endangers Millions of Devices

Posted on May 11, 2022 (updated July 26, 2022) by Marbenz Antonio


Two prominent C standard libraries that offer methods for typical DNS operations have a major vulnerability that might lead to DNS spoofing attacks, according to security experts.

The vulnerability was discovered by Nozomi Networks Labs in the uClibc and uClibc-ng libraries, which offer methods for performing typical DNS operations, including lookups and converting domain names to IP addresses.

uClibc is used by major vendors like Linksys, Netgear, and Axis, as well as Linux distributions like Embedded Gentoo, while uClibc-ng is “a fork specifically designed for OpenWRT, a common OS for routers that could be deployed across various critical infrastructure sectors,” according to the researchers.

At the time of writing, the vulnerability had not been fixed. That’s why Nozomi Networks Labs isn’t revealing the specifics of the equipment used to recreate the flaw.

Understanding DNS Spoofing Attacks

Domain Name Systems are a key component of the internet: browsers use them to obtain the IP addresses of the services you request. When you enter a URL, the browser consults a DNS service to find the appropriate servers.

Most DNS services are provided by default by ISPs, so customers don’t have to manually configure them, however, private DNS services can be purchased. You can even set your own DNS, but only if you’re sure what you’re doing, as trying to configure something you don’t completely understand can lead to security problems.
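To make the lookup step concrete, here is a quick way to watch the resolution a browser's stack performs, using the system resolver and localhost so that no network access is needed:

```shell
# Resolve a hostname through the system's resolver, as a browser's
# network stack does. localhost is used so no network access is needed.
getent hosts localhost | tee /tmp/dns_demo.out
```

For a real domain, the same command queries whichever DNS service the system is configured to use, which is exactly the step a spoofing attack tries to subvert.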

Threat actors frequently employ DNS spoofing or poisoning to insert illegitimate IP addresses into a DNS server’s cache. The purpose is to send users to rogue servers controlled by hackers, where their passwords may be stolen or malware installed. Even if the poisoning is only transitory (for example, until the cache is invalidated), it’s enough to compromise a large number of devices.

Because such rerouting is difficult to identify, you may feel you’re visiting your favorite website when you’re actually viewing a malicious replica.

MITM (Man In The Middle) attacks can affect DNS services. When authorities and governments seek to shut down unlawful websites, for example, they utilize DNS blocking to redirect visitors to a page explaining their actions.

The C Library DNS Vulnerability

Nozomi Labs discovered a pattern in DNS lookups performed with these C libraries: the transaction ID is first incremental, then resets to 0x2, before becoming incremental once again.

As a result, hackers may guess transaction IDs and launch DNS assaults under specified circumstances.

To locate the core reason, the researchers looked into libuClibc-0.9.33.2 and discovered assignments that explained the pattern. A variable “initialized with the value of the transaction ID of the last DNS request” is utilized in the DNS lookup function.

It should be highlighted that knowing the specific source port and “winning the race against the valid DNS request” are required to exploit the issue, therefore this isn’t a backdoor or normal defective code.

This does not imply that the exploit is harder than normal; rather, it depends on a number of circumstances. Nonetheless, hackers may guess the source port and transaction ID, which is all a DNS client requires to accept a DNS answer.
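The danger of predictable transaction IDs can be illustrated with a toy simulation. This is not uClibc's code; it only loosely models the incremental pattern the researchers reported and contrasts it with randomized 16-bit IDs:

```shell
# Toy model: an attacker who observed the previous transaction ID
# guesses the next one. Predictable IDs are guessed every time;
# random 16-bit IDs are guessed with probability ~1/65536.
awk 'BEGIN {
  srand(); trials = 2000
  # Predictable generator: each ID follows from the last one
  last = 2; hits = 0
  for (i = 0; i < trials; i++) {
    guess = (last + 1) % 65536   # attacker replays the pattern
    cur   = (last + 1) % 65536   # vulnerable generator does the same
    if (guess == cur) hits++
    last = cur
  }
  printf "predictable: %.4f\n", hits / trials
  # Randomized generator: the same guessing strategy almost never lands
  last = 2; hits = 0
  for (i = 0; i < trials; i++) {
    guess = (last + 1) % 65536
    cur   = int(rand() * 65536)
    if (guess == cur) hits++
    last = cur
  }
  printf "random: %.4f\n", hits / trials
}' | tee /tmp/txid_demo.out
```

In the real attack the spoofer must also match the UDP source port, which is why source-port randomization matters; this sketch models only the transaction ID.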

As the researchers discovered, the code does not randomize the source port. As a result, the poisoning attack can succeed if the operating system uses a stable or predictable source port, which is quite likely.

Unfortunately, even if the system randomizes the source port, attackers may still brute-force the port value, so randomization alone isn’t a complete fix.

How to Protect Against the DNS Threat

At the time of writing, there is no fix available, and even if there were, the time required to spread it across all possibly impacted devices would be enormous. The C library’s maintainer was unable to resolve the issue and has requested assistance.

Nozomi notified more than 200 suppliers about the issue 30 days before it was made public.

The compromised devices, according to the researchers, are “well-known IoT devices running the latest firmware.” Administrators should install the latest patches from all manufacturers and keep an eye on future firmware releases.

From an IT standpoint, all measures to harden network and DNS security are advised. CISA has published a thorough guide that may be used to assess the situation.

From the standpoint of end-users, it is critical to stay alert. A sudden URL change in the browser is the most visible indicator of a DNS attack.

You should definitely set your browser to always use HTTPS and watch for any signs of a false page, such as uncommon typos and language errors, or suspicious design eccentricities like a bogus logo.

Unfortunately, in some circumstances the deceit, such as a flawless clone, is so clever that you won’t be able to see the hazard.

VPN providers are increasingly offering innovative security features that may effectively block known malware and minimize MITM attacks. In any event, trust your instincts and exit the domain if you see something unusual.

 




Linux Lite 6.0, based on Ubuntu 22.04 LTS, is Now Available for Public Testing

Posted on April 27, 2022 (updated July 26, 2022) by Marbenz Antonio


Jerry Bezencon, the creator of Linux Lite, revealed today that the Release Candidate development version of the planned Linux Lite 6.0 distribution is now available for public testing.

Linux Lite 6.0 appears to be a considerable upgrade over the preceding 5.x series, not only because it uses the Ubuntu 22.04 LTS (Jammy Jellyfish) operating system series as its foundation, but also because of the numerous modifications it introduces. Linux Lite 6.0 will be powered by the long-term supported Linux 5.15 kernel family since it is based on Ubuntu 22.04 LTS.

To begin with, the distribution now uses the newest Xfce 4.16 desktop environment. Also, a new default window theme called Materia has been added, which includes both Light and Dark styles and tries to maintain the familiar aesthetic of prior Linux Lite releases while also supporting GTK4 programs and a broad range of desktop environments. The default icon theme is Papirus.

Another intriguing feature of the upcoming Linux Lite 6.0 release is the addition of an on-screen keyboard (Onboard), a screen reader (Orca), and a built-in screen magnifier that can be activated by pressing Left Alt + mouse scroll.

Linux Lite 6.0 also includes new default software such as the System Monitoring Center system monitoring utility, Google Chrome web browser, and El-Torito ISO writer. Furthermore, the developer pledges to include the newest stable version of the LibreOffice office suite in upcoming major Linux Lite versions.

Among the notable changes in Linux Lite 6.0 is the addition of a redesigned GRUB bootloader menu that no longer includes the Memtest memory testing application, but instead displays restart and shutdown choices. It also includes an updated version of the Whisker Menu application menu, as well as a new in-house utility called Lite Patch for deploying emergency security updates.

The final Linux Lite 6.0 release will be available on June 1st, 2022. Until then, you may download the Release Candidate (RC) version from the release announcement page and give it a test drive on your computer to see its new features and improvements. Please keep in mind that this is a pre-release version that should not be used in production settings.

 




Who is Creating Businesses on the Backs of Free and Open Source Software?

Posted on March 31, 2022 (updated July 26, 2022) by Marbenz Antonio

As a commodity and a business, free and open-source software holds a unique position. The software has zero marginal cost (to use an economics phrase) and is infinitely replicable and easy to distribute (for those with decent Internet connections), but it requires some knowledge to build and significant expertise to maintain successfully.

This article looks at various businesses that make money by promoting free software. Let’s start with a sobering observation: detractors of free software have long said that you can’t make a living selling free software, and in a narrow sense this is true; free and open-source software should be owned by a community, not a firm. Yet thousands of individual programmers make a career while giving away all of their code, and open source can even be used to build profitable enterprises.

From CD Stacks to VM Stacks

Every business has discovered how difficult it is to transform a concept into a finished product. One historical example is Dava Sobel’s book Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time, which chronicles the creation of the chronometer in the 18th century. We don’t know if Sobel picked the subtitle, but the book’s main takeaway is that the inventor did not solve the problem on his own. He was never able to create a durable version of his invention that could be mass-produced and marketed at a reasonable cost; that task was left to a later engineer.

When it comes to transitioning from source code to production-ready deployment, free and open-source software has its own set of obstacles. Cygnus Solutions, which helped build several programming tools for the GNU project, was one of the first firms to bridge the divide. Despite serving a small niche of programmers interested in the GNU platform, the firm was an essential element of the computer infrastructure in the late 1980s and early 1990s.

“We were setting a market price for all the preparation that needed to be done to find, collect, configure, test, document, distribute, and maintain packages of free software competitive with proprietary software,” said Michael Tiemann, the creator of Cygnus, years later.

Cygnus went on to create Cygwin, a free Unix-like environment that can be installed on Microsoft Windows. This package of tools was installed by many Windows users who valued the advantages of the Unix shell and utilities. Cygwin was the forerunner to Microsoft’s Windows Subsystem for Linux, which was introduced in 2019.

Red Hat was a more prominent and successful proponent of the two-phase concept of packaging a solid distribution of free software and following up with support. Tiemann revealed at a presentation about Red Hat that he saw the potential in the little company right away and attempted to buy it, but the Cygnus board and management refused. Red Hat, on the other hand, finally purchased Cygnus. Since then, Tiemann has held several leadership positions at Red Hat.

Transparency, inclusiveness, flexibility, collaboration, and community are the essential values of open organizations, according to Jim Whitehurst (currently President of IBM), who served as Chief Executive Officer of Red Hat.

Red Hat contributes to free software projects, such as the Java Spring framework, and produces its own free software. Because a group of hackers called the CentOS project re-engineered Red Hat’s processes, for a long period anybody could operate a GNU/Linux system using the same versions of software present in Red Hat’s commercial edition, Red Hat Enterprise Linux. After many years, Red Hat took over CentOS, which currently exists as a kind of test release for Red Hat, sitting between the more experimental Fedora project and the stable RHEL.

Red Hat has declared that it is going “up the stack,” concentrating on frameworks like Spring and other tools for today’s hot computing jobs, such as web development. The company has followed the industry into virtual machines and cloud computing, and is currently concentrating its efforts on the OpenShift container-based platform.

When it came to offering GNU/Linux systems to clients, Red Hat occupied a relatively secure niche, with just a few rivals such as Canonical (which maintains the extremely popular Ubuntu distribution) and SUSE. By moving beyond this niche into virtualization and the cloud, Red Hat and Canonical join a market dominated by genuine behemoths like Amazon, Google, and Microsoft, as well as VMware and even IBM, which acquired Red Hat in 2019.

Other companies create and share software while founding their business strategy on something else. They may work in a field unrelated to computing, such as automotive, but design software to satisfy an internal need and then strive to establish a community around it.

Current open-source business

James Vasile and Karl Fogel, two very skilled free software programmers, manage Open Tech Strategies. They make the majority of their money by developing free software for clients. They also provide consultancy services to companies looking to develop an open-source strategy. Producing Open Source Software: How to Run a Successful Free Software Project was written by Fogel, and their firm created a list of archetypes for open source development for Mozilla.

One of LeadingBit’s main services is assisting companies in establishing an Open Source Program Office (OSPO). OSPOs are becoming an increasingly beneficial investment for both companies and institutions. At opensource.com, a key news and discussion site for the open-source movement, some of the tools and methods that can help construct an effective OSPO are outlined.

An OSPO’s initial responsibility is to locate and record all of the free software that the corporation or college is employing. Many managers are unaware that they are using and even distributing free software, because programmers smuggle it in without alerting management. This is both unfair and dangerous, especially if a programmer includes code with a restrictive license (essentially the GPL) in the company’s proprietary product. The masquerade can come to an end when a proprietary product generates an error message that alerts free software developers to the fact that their code has been taken. Such embarrassments can occur when there is no openness and accountability inside the firm.

Some other tasks of an OSPO include:

  • Creating and enforcing regulations for the use and creation of free software
  • Providing staff with time off to participate in free software groups outside of the company
  • Creating incentives for people to participate in and contribute to these communities
  • Creating a general framework for the usage of free software in the company

Bonewald is dedicated to improving the maturity of free software by strengthening open source communities and products. Accountability, contributor stability and maintenance, support availability, security checks, and gathering metrics to track all of those attributes are some of the features that mark a move towards maturity.

Bonewald has also been working on a platform called IEEE SA OPEN for the past year, arguing that open source communities can learn a lot from standards creation. Well-known organizations such as the Apache Foundation, the Eclipse Foundation, the Linux Foundation, and the Savannah project of the GNU project fulfill this function.

The CLA Linux Institute is a non-profit organization that operates in numerous countries and now works online. 4Linux is a Brazilian firm that focuses on open-source software classes for teenagers, with unique, engaging training techniques and resources.

By launching LPI testing in Brazil, 4Linux spearheaded the first campaign to offer certifications for free and open-source software in the country. The firm can also claim to be the world’s first to provide Linux education online. It used to do more coding work, but now concentrates on testing and bug fixes. 4Linux sees interest in open source from start-ups and tech-based businesses, in addition to government.

Conclusion

Open source has proven to be not just long-lasting but essential to modern life. Hot new software fields such as big data, artificial intelligence, and encryption release their flagship projects as open source. Even in the cloud, the majority of these cutting-edge services are open source, which consumers like because they know they can study the technology without being tied to a specific cloud vendor.

Free software is produced and maintained by the world’s largest computing corporations, including IBM, Intel, Microsoft, Oracle, and others. These businesses rely on free software to support their proprietary operations. Thousands of professionals will be able to live out their dreams as free software programmers as a result of their efforts.

However, as this essay has demonstrated, businesses may profit while sticking to open source. Many customers demand free software and will pay you to create it. Money may also be generated by supporting the open-source community and activities.

 




The Internet Archive, Open Knowledge, and the History of Everything

Posted on March 31, 2022 (updated July 26, 2022) by Marbenz Antonio

Digital storage is both the most fragile and the most durable medium ever devised. On a hard drive, a change in the magnetization of a few tiny bits can wipe out content forever, and anyone who causes trouble on their website or through social media can easily delete the embarrassing evidence with a few keystrokes. However, the ability to produce digital copies at minimal cost allows material to be duplicated and kept in secure locations. The Internet Archive uses this second characteristic of digital material to preserve the history of the web, and more.

When the Internet Archive was founded in 1996, most individuals had only had access to the internet for a few years. Already, computer expert Brewster Kahle could see that historical material was being destroyed, so he founded the Internet Archive to save it. The archive’s engines presently crawl roughly 750 million sites every day, each of which may hold hundreds or thousands of distinct web pages. At the time of writing, the archive holds an estimated 552 billion web pages, and it contains much more than just websites. This article looks at the Internet Archive’s accomplishments and what it offers both scholars and everyday computer users.

Another facet of free information is online sites that provide original content, which writers frequently use when researching pieces like this one. Wikipedia, which celebrated its 20th anniversary on January 15 of this year, is the superhero of these free sites. Although Wikipedia’s material is original, it makes extensive use of references and cautions readers against citing it as a primary source. Furthermore, Wikipedia’s content and pictures are licensed under the Creative Commons Attribution License, the GNU Free Documentation License, or both, so the information frequently appears on other websites.

Lost in the Mists of Time

The internet’s key weakness is its impermanence. The United States Supreme Court has learned this lesson the hard way: the justices and their staff often reference websites in their decisions, and according to researchers, nearly half of those links are now broken, returning the familiar 404 error. That means we can no longer examine the evidence the judges used to make such important decisions.

News sites, academic researchers, and anyone else who relies on the web’s core feature, the simplicity of linking to other sites, faces the same risk of losing accountability. The issue isn’t limited to sites that have gone 404 (disappeared); it also applies to sites that update their material after you’ve made a point based on the previous content. That is why, when commentators cite other people’s site material or social media posts to make a point, they often post screenshots of the content as it stood.

Amber, a project of Harvard’s Berkman Klein Center for Internet & Society, offers a more organized approach to archiving the past. Amber lets you save a copy of a web page while you’re viewing it. It has a basic requirement, however: a web server on which to keep the material. Most of us use third-party online services and lack the permissions needed to save a page. For them, Harvard offers Perma.cc, a kind of “Amber as a service,” where anybody can store a website in its present state and create a URL that others can refer to later. Drupal.org also allows you to save pages using Amber, which is a plus. The Internet Archive maintains a copy of Perma.cc. To see how common the problem of broken links is, I checked one of my own articles, a substantial piece published four years before I researched this one: of its 43 links, seven were already broken.

The Internet Archive is a great place to start. Because they don’t throw anything away, you may access a website at any time. Let’s look at ways to get old pages back. This may be done using the Wayback Machine, an Internet Archive search interface.

Assume that one of the links on this page has become a 404. The following steps will take you to the content.

  1. To discover the original URL you wish to visit, look at the source code of this web page.
  2. Use the Wayback Machine to go back in time.
  3. In the search box, type the URL.
  4. The dates on which the Wayback Machine archived this page are displayed on the page retrieved by the Wayback Machine. You may access the page as it appeared on any of those dates by clicking on it. Please be patient while the site loads slowly. An archive has the luxury of waiting.
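For scripted lookups, the Internet Archive also exposes a public availability API at archive.org/wayback/available that reports the closest saved snapshot for a URL. The sketch below builds a query and parses a canned response in the API’s documented JSON shape, so it runs offline; the helper names are our own.

```python
import json
from urllib.parse import urlencode

API = "https://archive.org/wayback/available"

def availability_query(url, timestamp=None):
    """Build a query against the Wayback Machine's availability API.

    `timestamp` is an optional YYYYMMDD string asking for the snapshot
    closest to that date."""
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    return API + "?" + urlencode(params)

def closest_snapshot(response_text):
    """Extract the archived URL from an availability-API JSON response,
    or None if the page was never archived."""
    data = json.loads(response_text)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

# A canned response in the documented shape, so the example runs offline:
sample = json.dumps({"archived_snapshots": {"closest": {
    "available": True,
    "url": "http://web.archive.org/web/2022/http://example.com/",
    "timestamp": "20220101000000", "status": "200"}}})

print(availability_query("example.com", "20220101"))
print(closest_snapshot(sample))
```

Feeding the query URL to any HTTP client returns JSON like the `sample` above.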

You can alternatively forgo the visual interface and search for the page manually, but that is a more complex topic we won’t go into here. To ensure that a web page is saved in its present state, you can use the save-page-now functionality; there’s also a file upload option.

By my estimate, more than 250 of my articles and blog posts have vanished from various websites. Some I could recreate from preserved drafts; others I discovered through searches in unusual locations such as mailing list archives. The Internet Archive, however, is certain to have them all. Whenever I decide one is worth keeping, I recover it and post it on my own website.

You probably don’t agree with everything on the internet, so you won’t agree with everything on the Internet Archive either. Remember, though, that anything people publish on the internet, no matter how offensive, may be useful to historians and scholars. To comply with take-down requests, the Internet Archive has a copyright policy comparable to those of social networking sites.

When evaluating this article, Brewster Kahle, the Internet Archive’s Founder and Digital Librarian, said:

The pandemic and disinformation operations have demonstrated how reliant we are on reliable and high-quality information available online. These are the functions of a library, and we are pleased to assist in any way possible.

In Praise of Brute Force Computer Algorithms

How can the Internet Archive regularly capture the current state of a medium that is several orders of magnitude larger than anything that came before it?

The solution is straightforward: they apply the same brute-force strategies as search engines. The Internet Archive goes through the web page by page, trying to find everything it can, and has leased huge amounts of storage to save everything it discovers.

Programmers are always looking for ways to avoid brute-force approaches, which run in O(n) time and can be scaled up only by spending a proportional amount of computing power. Sometimes, however, brute force is the best option.
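The O(n) cost of brute force is easy to see by comparing a linear scan of a corpus with an indexed lookup; a rough Python sketch (the URLs are synthetic):

```python
import time

# A synthetic corpus standing in for a crawled collection of URLs.
pages = [f"https://example.org/page/{i}" for i in range(200_000)]
target = pages[-1]

def scan(corpus, url):
    """Brute force: O(n), cost grows with the size of the corpus."""
    for candidate in corpus:
        if candidate == url:
            return True
    return False

# The usual optimization: build an index once, then answer in O(1).
index = set(pages)

t0 = time.perf_counter(); found_scan = scan(pages, target); t_scan = time.perf_counter() - t0
t0 = time.perf_counter(); found_index = target in index; t_index = time.perf_counter() - t0

print(found_scan, found_index)
print(f"scan: {t_scan:.6f}s  index: {t_index:.6f}s")
```

The scan is dramatically slower per query, but it needs no preparation, which is exactly why crawling, which must touch every page anyway, tolerates brute force.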

Graphical processing, for example, involves reading a large amount of data about the graphic and applying algorithms to each pixel. This is why few programs could perform graphical processing before affordable hardware was designed to suit these applications’ specific demands: the now-ubiquitous graphics processing unit, or GPU.

Modern machine learning is another area where brute force prevails. The underlying concept dates back to 1949, when digital computing was still in its infancy. For decades, artificial intelligence experts were enthralled by the neural network, but after much research and sweat, it was branded a failure. Then processors (including GPUs) became fast enough to run the algorithms in a reasonable length of time, and virtualization and the cloud made compute power almost infinite. Machine learning is now being used to solve classification and categorization problems all around the world.

A word about limitations: web crawling misses a lot of what we view on the internet every day. The Internet Archive will not go behind paywalls, which hide a lot of journalistic and scholarly information. And because the crawler cannot submit forms, it cannot see what users view on dynamically generated pages such as those on retail websites.

Beyond the Web

The history of lost culture is woven into the fabric of history. The following are some of the disasters that we still mourn:

  • After Spain defeated the Mayans in Central America in the 1500s, a single Spanish bishop ordered the destruction of all Mayan cultural and religious documents. The few codices that have survived reflect a complex philosophical investigation that we will never be able to fully understand.
  • Invading Mongols burned Baghdad’s library in 1258, an act of gratuitous destruction that accompanied their conquest of the city. This shattered a fruitful legacy on which medieval Europe’s intellectual revival was built.
  • The destruction of Alexandria’s old library appears to have occurred over several centuries. The Internet Archive was founded as a result of Kahle’s inspiration from this resource.

Add to these tragic events the destruction of ancient architecture (often dismantled by local residents looking for cheap building materials), the extinction of entire languages (each losing not only a culture but also a unique worldview), and the disappearance of poems and plays by Sappho, Sophocles, and others that shaped modern literature.

Long before the internet, many megabytes of data were locked away in corporate data centers. Their owners surely recognized that data could be lost as organizations transitioned to new computers, databases, and formats. When software suppliers go out of business, customers are left with material in opaque, proprietary formats. People today have priceless memories stored on physical media for which few readers remain available. In all these ways, our data is slipping from our grasp.

Although the Internet Archive’s terms of service emphasize their importance to scholars, they provide fantastic tools that anybody may access. They have a book lending service that looks to be similar to what is available now at other libraries. They provide a section for youngsters with instructional materials, as well as unique repositories for music, photos, videos, video games, and historic radio broadcasts.

 




Linux Professional Institute Releases Web Development Essentials

Posted on March 31, 2022 (updated July 26, 2022) by Marbenz Antonio

The Web Development Essentials training is now available from the Linux Professional Institute (LPI). The curriculum gives students an overview of web-based software development. The program consists of learning goals, Learning Materials, a test, and a certificate granted upon successful completion of the exam.

Learners who are just getting started with software development can benefit from Web Development Essentials. It is meant to be taught in a one-semester class or a similar setting. The program’s material covers the core ideas needed to create web-based apps: HTML, CSS, JavaScript, Node.js, and SQL are all included, each at a fundamental level. The course is designed to cover enough material, in a manageable length of time, for the student to grasp the fundamental concepts of web development and apply the necessary technology to simple projects. Taken together, the curriculum enables students to create a small web application on their own.

“The goal of Web Development Essentials is to provide a basic understanding of software development. It covers all of the fundamentals, but with just enough content to get started constructing a small app right away,” explains Fabian Thorns, LPI’s Director of Product Development. Thorns says, “The combination of learning goals, Learning Materials, a test, and a certificate is a comprehensive package that gives both learners and teachers everything they need to get started.”

“The objective of LPI is to assist everyone working with open technology. Software development is an important aspect of professional IT and one of the most visible aspects of open source technology. With Web Development Essentials, we give an introduction to software development using an open-source stack that is available to anybody on any platform,” explains Matthew Rice, Executive Director of LPI.

 




What’s the difference between Linux and Windows in 2022?

Posted on March 16, 2022 (updated July 26, 2022) by Marbenz Antonio

Users who want to try something new or are weary of their macOS or Windows operating systems might consider switching today. macOS is built on a UNIX core, making the move from macOS to Linux fairly painless. Users of Windows, on the other hand, will need to adjust to more changes.

The Linux operating system will be compared to Microsoft Windows in the following tutorial.

Microsoft Windows vs. Linux File System

Microsoft Windows stores files on separate drive letters (C:, D:, E:). On Linux, files are organized in a tree structure starting with the root directory, the first directory in the file system, which then branches into subdirectories. The root directory is denoted by a forward slash (/).
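Python’s pathlib module can illustrate the two conventions side by side, regardless of which system you run it on; a small sketch:

```python
from pathlib import PurePosixPath, PureWindowsPath

# Windows paths are rooted in a drive letter...
win = PureWindowsPath(r"C:\Users\tom\notes.txt")
print(win.drive)     # 'C:'
print(win.parts[0])  # 'C:\\'

# ...while every Linux path hangs off the single root directory, /.
lin = PurePosixPath("/home/tom/notes.txt")
print(lin.root)      # '/'
print(lin.parts)     # ('/', 'home', 'tom', 'notes.txt')
```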

Key Differences

  • Because Linux is open source, users may alter its source code as needed, whereas Windows, as a commercial operating system, gives no access to its source code.
  • Linux’s open development model means problems tend to be found and repaired quickly, whereas Windows’ enormous user base makes it a prime target for attackers.
  • Windows can be sluggish, particularly on older hardware, whereas Linux is generally much quicker.
  • Windows treats printers, CD-ROMs, and hard drives as devices; Linux represents these peripherals as files.
  • To store files, Windows uses drive letters (C:, D:, E:) and folders; Linux organizes everything in a tree structure that starts at the root directory.
  • In Linux, two files whose names differ only in case can exist in the same directory; in Windows, two files with the same name cannot share a folder.
  • In Microsoft Windows, program and system files are nearly always saved on the C: drive, whereas in Linux they are spread across several directories.

File Types

In UNIX and Linux, everything is considered a file: ordinary files are files, directories are files, and even the keyboard, mouse, and printer are represented as files.

General Files

General files, also known as ordinary files, can contain text, program code, movies, or photos. These files, the most widely used kind on Linux, can be in binary or ASCII format.

Directory Files

Directory files act as containers for other files, and users can create subdirectories (a directory within a directory). Directories correspond to folders in the Microsoft Windows operating system.

Device Files

In Windows, devices such as hard disks, CD-ROMs, and printers appear as drive letters such as G: or H:. In Linux, devices are represented as files in the /dev directory. For example, if the first SATA hard disk has three primary partitions, they are numbered /dev/sda1, /dev/sda2, and /dev/sda3.

One of Linux’s most powerful features is that users can read, edit, or execute (run) any file type, including devices, subject to permissions. Permissions can be adjusted to apply different sorts of access restrictions to different categories of users.
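The “everything is a file” idea can be seen from Python’s stat module, which classifies directories, regular files, and device files through the same interface (the /dev/null check assumes a Linux or UNIX system):

```python
import os
import stat
import tempfile

def kind(path):
    """Classify a path the way Linux sees it: everything is a file."""
    mode = os.stat(path).st_mode
    if stat.S_ISDIR(mode):
        return "directory"
    if stat.S_ISCHR(mode):
        return "character device"
    if stat.S_ISBLK(mode):
        return "block device"
    if stat.S_ISREG(mode):
        return "regular file"
    return "other"

d = tempfile.mkdtemp()
f = os.path.join(d, "sample.txt")
open(f, "w").close()

print(kind(d))            # directory
print(kind(f))            # regular file
print(kind("/dev/null"))  # character device
```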

Windows User vs. Linux User

There are three different sorts of Linux users:

  • Regular Users
  • (Root) Administrative Users
  • Service Users

Regular Users

Regular user accounts are created when a user installs Ubuntu on their machine. All of a regular user’s folders and files live under their home directory, /home/<username>, and regular users cannot access other users’ folders.

Administrative (Root) Users

When Ubuntu is installed, a second account known as the root account is created in addition to the normal account. This is a superuser account that can install applications and access any file. A user logs in as root to perform administrative operations, install software, or change system files, and uses their normal account for everyday tasks such as browsing the internet or listening to music.

Service Users

Linux is well-known as a server operating system, and services such as Squid, Apache, and e-mail have their own service accounts. Service accounts improve the security of a machine because Linux can allow or deny access to various resources depending on the service.

  • In the desktop version of Ubuntu, service accounts will not be visible.
  • Regular accounts are referred to as standard accounts in Ubuntu Desktop.
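On a Linux system, the standard pwd module can show the distinction in practice: root is always UID 0, and service accounts typically carry a non-login shell (a sketch assuming a conventional /etc/passwd layout):

```python
import pwd

# Root is the superuser, always UID 0.
root = pwd.getpwnam("root")
print(root.pw_uid)  # 0

# Service accounts typically get a non-login shell such as
# /usr/sbin/nologin or /bin/false, so nobody can log in as them.
service = [u.pw_name for u in pwd.getpwall()
           if u.pw_shell.endswith(("nologin", "false"))]
print(service[:5])
```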

In Windows, there are four different types of user accounts:

  • Administrator
  • Standard
  • Child
  • Guest

File Name Conventions in Windows and Linux

In Windows, a user is not permitted to save two files with the same name in the same folder.

In Linux, on the other hand, two files with the same name can exist in the same directory as long as their names differ in case.
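A quick Python demonstration, assuming a case-sensitive filesystem such as ext4 (on default Windows or macOS filesystems, the second open would simply overwrite the first):

```python
import os
import tempfile

d = tempfile.mkdtemp()

# On a case-sensitive filesystem these are two distinct files.
for name in ("SAMPLE", "sample"):
    with open(os.path.join(d, name), "w") as f:
        f.write(name)

print(sorted(os.listdir(d)))  # ['SAMPLE', 'sample']
```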

Home Directories in Windows and Linux

In Linux, a /home/<username> directory is created for each user. This home directory (e.g., /home/tom) is where users keep their folders and files. Users cannot store files outside their home directory, nor can they access other users’ folders: a user cannot read a directory belonging to Jerry (/home/jerry) unless granted permission. This concept is analogous to the C:\Documents and Settings (now C:\Users) folder in Microsoft Windows.

When a user logs in to the Linux operating system, their home directory (for example, /home/tom) is the default working directory. Note that /home itself is sometimes called the home directory, which is a mistake: the home directory is the user’s own directory beneath it, such as /home/tom.

To change the working directory, you may use a few commands, which we’ll go over in more detail later.
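In Python, for instance, the home and working directories can be inspected and changed like this (os.chdir plays the role of the shell’s cd command):

```python
import os
from pathlib import Path

# Both spellings resolve to the current user's home directory,
# e.g. /home/tom on a typical Linux desktop.
home = Path.home()
print(home)
print(os.path.expanduser("~") == str(home))  # True

# os.getcwd() reports the working directory; os.chdir() changes it,
# which is what the shell's cd command does under the hood.
os.chdir(home)
print(Path.cwd().resolve() == home.resolve())  # True
```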

Other Directories in Windows and Linux

Windows always saves program and system files to the C: drive. In Linux, these files are distributed across several directories: boot files live in /boot, device files in /dev, and program and software binaries in /bin and /usr/bin.

Other key Linux directories include /etc for configuration files, /var for variable data such as logs, and /usr for installed software and shared resources.

These are the primary differences between the Linux and Windows operating systems. Users switching from Windows to Linux may notice additional differences, which will be covered in greater depth in later tutorials.

Differences Between Windows and Linux

Windows:

  • Stores folders and files on separate drive letters (C:, D:, E:)
  • Treats printers, CD-ROMs, and hard drives as devices
  • Has four distinct sorts of users: Administrator, Standard, Child, and Guest
  • The Administrator user holds administrative rights
  • Cannot store two files with the same name in the same folder
  • Uses My Documents as the default home directory

Linux:

  • Uses a tree-like hierarchical file system
  • Has no drive letters
  • Treats peripherals such as printers, hard disks, and CD-ROMs as files
  • Has three sorts of users: regular, root, and service accounts
  • The root user is the superuser with administrative rights
  • Uses case-sensitive file names (for example, on Linux/UNIX, SAMPLE and sample are two separate files)
  • Gives each user their own home directory, /home/<username>

 




How to Handle Third-Party Programming Library Vulnerabilities

Posted on March 3, 2022 (updated July 26, 2022) by Marbenz Antonio

Almost every piece of software uses several layers of third-party libraries. Consider the case when a Java program uses a standard library method to prepare a date. That function could in turn call a calendar-related function from another library. And then another function is called, and so on.

What if one of those highly nested libraries has a security weakness that is made public? Your application is now vulnerable, and a malicious attacker can gain access to the server where it is running—even if you didn’t introduce a problem yourself.

There are several scanners available to assist you in finding vulnerabilities in dependencies, but dealing with them requires some finesse. In this blog, we’ll look at the procedure.

Information about Flaws from Various Sources

We’re all protected by a large network of security specialists who run software through a variety of demanding tests to find and disclose dangerous bugs. Their testing might be as simple as passing unusual data to a function to see whether it becomes confused and lets an attacker take control of the software. Fuzzing, a fascinating technique, involves sending massive amounts of randomly generated input to programs to uncover faults and vulnerabilities. Other tools hunt for suspicious issues by thoroughly examining either the code itself (static analysis) or a running program (dynamic analysis).
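A toy fuzzer is only a few lines. The sketch below throws random byte strings at a deliberately buggy, hypothetical record parser and counts the failures it provokes; real fuzzers such as AFL are vastly more sophisticated, but the principle is the same:

```python
import random

def parse_record(data: bytes) -> tuple:
    """A deliberately fragile parser (hypothetical): the first byte is a
    length field that is trusted blindly."""
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:  # a careless parser might omit this check
        raise ValueError("truncated record")
    return length, payload

def fuzz(parser, rounds=1000, seed=42):
    """Throw random byte strings at a parser and count the crashes."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(rounds):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 20)))
        try:
            parser(blob)
        except Exception:
            failures += 1
    return failures

print(fuzz(parse_record))  # many random inputs trigger the ValueError
```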

Of course, less well-intentioned researchers are also on the lookout for the same holes, creating nasty exploits for government and ransomware customers. Although zero-day exploits (security holes not yet known to the public) are dangerous, the majority of attacks rely on flaws that are well known and that victims have allowed to remain on their systems. You can be confident that hostile actors are perusing the publicly available flaw lists.

At the absolute least, execute an automated vulnerability check before each critical stage in the life cycle, such as quality assurance or deployment. You don’t want to pass a critical point in the life cycle with a vulnerability, because the cost of correcting it later will be substantially higher.

Regularly running vulnerability scanners is an important aspect of DevSecOps, a popular approach that incorporates security throughout the application life cycle. Several regulatory contexts, including US agencies such as the CIA and FBI, require scans that follow the Security Content Automation Protocol (SCAP). NIST created SCAP, and an open-source implementation named OpenSCAP is available.

Easy Fixes

You’ve discovered a weakness! Hopefully, the solution will be simple and painless. If the package’s creators have issued a new version that includes the patch, all you have to do now is rebuild your application using the updated version. Naturally, every modification to a package might lead to new issues, so you should perform your regression tests following the update.

Project Thoth, an open-source tool developed by Red Hat for locating safe libraries for Python applications, represents one advanced trend in software builds. Thoth doesn’t just bring in the most recent stable version of each library your program utilizes; it searches a variety of public databases and tries to offer a package combination that works flawlessly together. Developers in other programming languages are taking the same approach.

If no new version fixes the flaw yet, you might be able to find an older version of the library that doesn’t have the bug. Of course, if the old version contains other flaws, it will be of little use to you. If it appears to fit your needs, you must ensure that you don’t rely on functionality introduced in newer versions, and you must run your regression tests once more.

Identifying the Context of a Flaw

Assume that the solutions described in the preceding section are not accessible. You’re trapped using a library that has a known security weakness in it to create your software. Now we’ll need to do some more in-depth investigation and reasoning.

Examine the factors that could lead to a security breach. Many exploits are theoretical at the time security researchers report them, but they can swiftly become practical, so study the vulnerability report to learn what an attacker needs in order to pose a threat. Do they need physical access to your system? Do they need to be root (the superuser)? If they’ve already gained root, they hardly need your flaw to cause havoc. You may conclude that an attacker is unlikely to be able to exploit the vulnerability in your environment.

Some automated vulnerability scanners report liberally: they may flag something as an issue that, in your situation, you can determine is not one.

You may be able to add additional tests to ensure that the bug isn’t exploited. Assume one of the arguments supplied to the vulnerable function is the buffer length; the attack will only be dangerous if that parameter is negative. A buffer’s length should, of course, always be zero or positive. With a negative value in that parameter, your software will never validly call the function. You may improve security by including this before each function call:

if (argument < 0)
    return -1;  /* refuse to pass a negative length to the vulnerable function */

Other vulnerabilities work by introducing characters that should never appear in valid input, allowing you to check for them before passing data to functions. Some languages, following a Perl innovation from many years ago, label hazardous variables as “tainted” so you know to examine them for security flaws.

Instead of embedding such checks throughout the program, it could be easier to incorporate a check for unsafe input in an application proxy or other wrapper.
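One way to centralize such checks is a wrapper that validates every string argument before the underlying function ever runs. A minimal Python sketch, with a hypothetical `lookup` function standing in for the vulnerable library call and an illustrative blocklist of characters:

```python
import functools

DANGEROUS = set(";|&`$<>")  # characters that should never appear in this input

def sanitized(func):
    """Wrap a function so every string argument is checked before the
    (possibly vulnerable) implementation ever sees it."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and DANGEROUS & set(value):
                raise ValueError(f"rejected unsafe input: {value!r}")
        return func(*args, **kwargs)
    return wrapper

@sanitized
def lookup(hostname: str) -> str:
    # Stand-in for a call into a library with a known injection flaw.
    return f"resolving {hostname}"

print(lookup("example.com"))
try:
    lookup("example.com; rm -rf /")
except ValueError as e:
    print(e)
```

The decorator keeps the validation in one place, so removing it is trivial once the library itself is fixed.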

This solution should only be temporary, as the library’s maintainers should resolve the defect as soon as possible.

If no one has already shared this workaround with the community, leave a comment on the issue where the problem was discovered and give your solution to others.

You could discover, by the way, that the disclosed defect affects a function you don’t use. Be careful, though: you could call a library function that in turn calls the unsafe function indirectly. Tracing and profiling tools let you examine your application’s whole tree of function calls to discover whether you’re at risk.
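In Python, sys.settrace gives a crude version of such tracing. The sketch below records every function called at runtime and reveals that a hypothetical `vulnerable` function is reached indirectly, even though our code never calls it by name:

```python
import sys

def vulnerable():  # hypothetical flawed library function
    return "unsafe work"

def helper():      # library internals call the flawed function
    return vulnerable()

def my_code():     # our code only ever calls the helper
    return helper()

called = set()

def tracer(frame, event, arg):
    """Record the name of every function entered during execution."""
    if event == "call":
        called.add(frame.f_code.co_name)
    return tracer

sys.settrace(tracer)
my_code()
sys.settrace(None)

print("vulnerable" in called)  # True: the flawed function is reached indirectly
```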

Perhaps you’re squarely in attackers’ sights because you’re using a function with a flaw you can’t fix. So ask yourself: do you really need the flawed feature? Alternative libraries with equivalent functionality are frequently available, or the specific usage may be simple enough for you to implement yourself. Writing your own version of the function is risky, however, since you’re more likely to add new issues than to solve the existing one; after all, you’re even less experienced with that code than the library’s maintainers.

You might also feel confident enough in your coding abilities, and familiar enough with the package you’re using, to contribute a bug fix. This is only an option for open source software, but hopefully you’re using it wherever possible.

We don’t want to leave you without reiterating the importance of defense in depth. If your app is only for internal use, for example, firewall restrictions and authentication should guarantee that you’re only talking with valid users. On the other hand, even an internal user might be hostile, or an outsider could use one as a stepping stone into your server. A secure application is therefore still required.

Conclusion

Security issues are a continual danger in software development, and they may be found anywhere. You need to be aware of such flaws since a breach in your system might result in a ransomware attack, the loss of valuable client data, the use of your system in a botnet, or other unpleasant outcomes. Do not allow yourself to become a victim.

However, because there are so many reported problems, you can’t simply discard every library in which a vulnerability has been detected. If an upgrade is available, apply it as soon as possible. In all other circumstances, we hope this information helps you make sound security judgments.

 




Is it Worth It to Get a Linux Certification?

Posted on January 14, 2022 (updated July 26, 2022) by Marbenz Antonio

Linux is a must-have in today’s workplace and cloud computing environments. The days of early adopters risking their careers by implementing a Linux system rather than a well-established UNIX or Microsoft Windows server are long gone. Admins now have a variety of enterprise-ready Linux server distributions to choose from, including Ubuntu, Red Hat, SUSE, Kali, and others.

Linux is found everywhere, not just in data centers. Both Amazon Web Services (AWS) and Google Cloud rely heavily on Linux to power their systems, and both provide a large range of Linux machine instances. Cybersecurity professionals favor Linux for penetration testing and ethical hacking.

With such a strong focus on Linux, it’s no surprise that many IT professionals are pursuing Linux certifications. Are Linux credentials worthwhile, and if so, which ones should you pursue? We’ll delve deeper into the topic of Linux certifications and their worth.

What Linux Certifications Exist?

There are two types of Linux certifications: those that are independent of the Linux distribution and those that are tied to a specific distribution or vendor-specific version. The Linux Professional Institute (LPI), CompTIA, and the Linux Foundation all offer independent certifications.

LPI offers well-known certifications for Linux administrators (LPIC-1), Linux engineers (LPIC-2), and Linux enterprise professionals (LPIC-3). CompTIA offers the CompTIA Linux+ sysadmin certification. The Linux Foundation offers certifications for sysadmins (LFCS) and engineers (LFCE).

On the product side, we find certifications from Linux-centric firms, mostly for administrators, engineers, and architects, such as:

  • Red Hat: sysadmin (RHCSA), engineer (RHCE), and architect (RHCA),
  • SUSE: sysadmin (SCA), engineer (SCE), and architect (SEA), and
  • Oracle: Linux 5 & 6 sysadmin (OCA), and Linux 6 sysadmin professional (OCP).

There are several interesting credentials in the super-hot cybersecurity field, which is a minor detour away from core Linux. The GIAC Certified UNIX System Administrator (GCUX) certification, for example, focuses on protecting and auditing Linux and UNIX systems.

For cybersecurity professionals, there are the Kali Linux Certified Professional and Offensive Security Certified Professional (OSCP) certifications.

Take a look at our Complete Open Source Certification Guide if you want to learn more about Linux/Open Source certifications.

What Are the Job Prospects for Linux Professionals?

According to a 2018 Linux Foundation research, there is a high need for IT employees with Linux capabilities. A job search on Indeed.com in December 2019 revealed over 60,000 job listings in the United States that included Linux abilities. Is there, however, a similar need for Linux certifications as a result of this trend?

Our initial excitement is tempered when we dig deeper into those 60,000 Linux job postings. Searches for LPI and Linux Foundation certifications returned fewer than 250 job openings that required those credentials. Similarly, searches for Oracle and SUSE certifications returned virtually nothing.

Only Red Hat certifications appear in meaningful numbers (RHCSA: 550, RHCE: 720, RHCA: 115), and even these represent a small fraction of the total Linux openings.

Why do so few Linux jobs require a certification? According to discussions on numerous internet forums, hiring organizations prefer to verify a candidate's Linux competency through peer-level interviews.

Employers do appear willing to pay higher salaries for Red Hat-certified professionals, according to our Complete Open Source Certification Guide. Salaries tied to "generic Linux" certifications (LPI, Linux Foundation, and CompTIA) are on par with other business certifications such as Microsoft's MCSA; even so, holders of these certifications earn nearly $4,000 more per year than their non-certified peers.

MCSAs and admins with generic Linux credentials earn an average of $74,000, whereas Red Hat Certified System Administrators (RHCSA) earn an average of $86,000 or more. At the next certification level, a Red Hat Certified Engineer (RHCE) earns an average of $22,000 per year more than an LPIC-2-certified peer.

Should You Take a Linux Certification?

Linux skills are certainly in high demand. However, few Linux job advertisements explicitly demand a certification; most emphasize hands-on experience with the operating system. Given the pay differences between certified and non-certified workers, a Linux certification will most likely be viewed as a plus in the hiring process rather than a requirement.

If you work with Red Hat Linux, or want to, there is little doubt that you should pursue Red Hat certification. Start as a Red Hat Certified System Administrator (RHCSA) before becoming a Red Hat Certified Engineer (RHCE). After that, you can take the final step toward becoming a Red Hat Certified Architect (RHCA).

If you know you'll be working with Oracle Linux, you should take the Oracle OCA and afterward the Oracle OCP in Linux. However, only do this if your role is tied to Oracle Linux. The same logic applies to SUSE Linux and the SUSE certifications.

It's a different story with the generic certifications (LPI, Linux Foundation, and CompTIA). Because they are not tied to any particular distribution, they have much broader applicability. Paired with real-world Linux experience that you can demonstrate to a prospective employer, any of these certifications will be looked on favorably.

Of the three, you should probably go with LPI. That's because, unlike the Linux Foundation, it includes a professional growth path with stages for admin, engineer, and architect. Any of these certifications can also help further your career if you choose to work with Amazon or Google Cloud.

We already discussed cybersecurity. Because specialist Linux distributions such as Kali are favored platforms for penetration testing, a security certification can be a useful next step after your foundational Linux certifications.

Here at CourseMonster, we know how hard it may be to find the right time and funds for training. We provide effective training programs that enable you to select the training option that best meets the demands of your company.

For more information, please get in touch with one of our course advisers today or contact us at training@coursemonster.com

Posted in Linux | Tagged Linux | Is it Worth It to Get a Linux Certification?
