
OUR BLOG


Tag: Oracle

What are the benefits of using graph analytics?

Posted on May 18, 2022 (updated May 31, 2022) by Marbenz Antonio


Data is growing at an exponential rate, and increasing automation is creating a flood of data from smartphones, mobile and IoT devices, security systems, satellite imaging, cars, and other sources. The challenge, then, is how to rapidly obtain useful insights from ever-growing data sets of various types and sources.

Graph technology allows developers and analysts to easily get significant insights by exploring relationships and discovering connections in data. Much of the world’s data is truly linked, including financial transactions, personal and professional networks, manufacturing supply lines, and so on. Graphs show those connections instantly.

Here’s an example of how data sets are linked and how complicated analysis might become as a result of data connections.

[Figure: graph data model showing data connections and relationships]

Graph data platforms enable developers to work more efficiently. They provide the performance and scalability required for big installations, while improved query and search capabilities simplify access and reduce time to insights for connected-data use cases.

Oracle makes it simple to implement graph technology. Oracle’s converged database includes graph support, accommodating multi-model, multi-workload, and multi-tenant requirements in a single database engine. Built-in graph algorithms, pattern-matching queries, and visualization in Oracle Database and Oracle Autonomous Database enable analysts to swiftly find new insights. Graph analytics can be readily added to existing systems by making use of Oracle Database’s performance, scalability, reliability, and security.

Oracle is recognized as a leader in graph data platforms

Forrester evaluated graph data platforms against 27 criteria, including graph model/engine, deployment options, cloud, application development, API/extensibility, data loading/ingestion, data management, transactions, queries/search, analytics, visualization, high availability and disaster recovery, scalability, performance, data security, workloads, and use cases.


Why graph technologies from Oracle?

The following are key features of the Oracle offering:

  1. Complete graph database that supports both property graphs and RDF knowledge graphs. Oracle Database makes it easy to model relational data as graph structures and simplifies the identification, processing, and visualization of linked data sets, on-premises or in the cloud.
  2. Enterprise-level scalability and security. Interactive graph queries can be executed directly on graph data or on a high-performance in-memory graph server, which can handle millions of concurrent users and queries per second. Customers benefit from fine-grained security, high availability, simple management, and connection with other data sources in business applications. Oracle offers advanced multilevel access control for property graph vertices and edges, as well as RDF triples. Oracle also complies with ISO and W3C standards for representing and designing graphs and graph query languages.
  3. Comprehensive graph analytics to analyze relationships using over 60 prebuilt algorithms. To generate, query, and analyze graphs, analysts can use SQL, native graph languages, Java, and Python APIs, as well as Oracle Autonomous Database capabilities. They can quickly display relationships in data to identify insights, and then publish and share analytical results using interactive analytics and visualization tools. Graph Studio in Autonomous Database allows practically anybody to get started with graphs to investigate data relationships. Graph Studio streamlines the graph analytics lifecycle by automating graph data administration and by simplifying modeling, analysis, and visualization.

Graph analytics use cases

Money laundering detection in financial services: To make fraud detection easier, users can build a graph from transactions between entities, as well as from entities that share certain information, such as email addresses, passwords, and more. Once the graph has been formed, a single query can uncover all customers with accounts that have comparable information, expose which accounts are participating in circular payment schemes, and detect patterns used to perpetuate fraud.
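
As a rough illustration of what such a query can look like, the sketch below assembles a PGQL pattern-matching query (Oracle's property graph query language) as a Python string. The graph name (bank_graph), labels (Account, TRANSFER), and property names are hypothetical placeholders rather than an actual Oracle schema, and the exact clause layout can differ between PGQL versions.

# Hypothetical PGQL query for the circular-payment pattern described above.
# Graph name, labels, and property names are illustrative placeholders.
circular_payment_query = """
SELECT a1.account_id, a2.account_id, a3.account_id
FROM MATCH (a1:Account)-[:TRANSFER]->(a2:Account)-[:TRANSFER]->(a3:Account)-[:TRANSFER]->(a1)
     ON bank_graph
"""

# The query could be run from Graph Studio or an Oracle graph client session;
# here we only assemble and print it for inspection.
print(circular_payment_query)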

Traceability in manufacturing: Traceability is extremely important in the manufacturing industry. A car manufacturer may have to recall a vehicle because it has a component that was produced at a facility within a specified time period. Most businesses have a production database, a retail database, a sales database, and a shipping database. Unless the business has a graph database to link all the relationships and graph algorithms to highlight connections and essential information, it is difficult to uncover all the relevant information to locate the automobiles with the problem, where they were delivered, and to whom they were sold.

Criminal investigation: Graphing data is a simple and efficient technique to detect criminal networks and search for trends. The use of graph-based algorithms makes it simpler to pinpoint specific places, highlight co-traveler links, and identify significant suspects and criminal groups. Users, for example, can determine the “weakest link,” or the vertex on which the graph is dependent, by using betweenness centrality. If you delete that vertex, the entire network may collapse, indicating that you have just discovered the linchpin of a criminal organization.

Data regulation and privacy: A graph is ideal for tracking data lineage. By tracing the edges, the many stages of the data lifecycle can be followed and browsed vertex by vertex. With a graph, analysts can trace a path and determine where the information originated, where it was copied, and where it was used. With all of this information presented in a graph, data professionals can more easily assess how to meet GDPR requirements while remaining compliant.

Product recommendations in marketing: Graph databases capture all data and connect it to deliver real-time suggestions based on client demands and product trends. Many major organizations rely on graph analytics to generate product recommendations since the linkages are already defined and the analysis of these relationships to provide suggestions is quick. Graph analysis may also uncover trends that expose trolls, bots, falsely boosted reviews, and information that may bias marketing analysis.

Summary

Graph technology facilitates the exploration of relationships and the discovery of connections in data. Pattern recognition, classification, statistical analysis, and machine learning may be applied to these models by users, including non-technical ones, allowing for more efficient analysis at scale against huge amounts of data.


Why do Autonomous Database applications perform better than PostgreSQL on AWS?

Posted on May 18, 2022 (updated May 31, 2022) by Marbenz Antonio


In today’s customer-centric world, organizations find it difficult to grow their user base while meeting their customers’ speed, agility, and personalization needs—but that’s exactly what LogFire was able to do when it switched to Oracle Autonomous Database, a next-generation cloud platform with a fully automated database. LogFire raised order processing rates by 55% in a year while expanding its customer base by 2x. LogFire also decreased its TCO by utilizing Autonomous Database auto-scaling technology, going from ten external contractors administering the database to zero—the database automates all manual activities. LogFire now processes approximately 5 billion packets each year and is trusted by hundreds of clients worldwide. LogFire was also able to swiftly construct more features and customizations with Autonomous Database to grow into 10 new industries.

This blog will discuss how LogFire (now branded as Oracle Warehouse Management Cloud) migrated its cloud-native, mission-critical SaaS application to Oracle Cloud Infrastructure (OCI) and migrated all of its AWS PostgreSQL databases to Oracle Autonomous Transaction Processing (which also runs on OCI) for improved performance, elasticity, security, and TCO.

Oracle Warehouse Management Cloud, Born from LogFire

Oracle Warehouse Management Cloud is a leading cloud-based inventory and warehouse management service that assists organizations in automating numerous supply chain processes. LogFire, a pioneer in bringing the cloud to supply chain management, was founded in 2007 by Diego Pantoja-Navajas and was acquired by Oracle in 2016 as the foundation for Oracle WMS. “With the goal of revolutionizing supply chains by assisting organizations in adopting cloud technologies, we designed a standalone SaaS application that connected different systems and data to remove silos and make supply chains more efficient,” Diego explained during our chat. “We were always aware that we were creating a mission-critical application, the heart and lungs of the business. Because so many different types of enterprises would rely on it, it had to be bulletproof.”

Issues with Amazon Relational Database Service (RDS)

LogFire’s application used current cloud-native development approaches, with production databases on Rackspace running PostgreSQL. Arun Murugan, VP WMS Product Development, stated that they transferred their PostgreSQL test databases to Amazon RDS to save money, but maintained production on Rackspace and outsourced administration because LogFire did not have an in-house DBA staff. “While RDS eased scalability, it had performance and reliability issues that made it unsuitable for our production databases.” These issues included “no performance guarantees, manual tweaking of a huge number of database settings to maintain performance levels, no automatic bi-directional database replication, security vulnerabilities, patching and maintenance downtimes, and more,” added Arun.

[Figure: LogFire’s architecture before moving to Oracle Cloud Infrastructure]

Why LogFire Moved their Applications and Databases to Oracle Cloud Infrastructure

The supply chain, like the rest of the business world, is undergoing massive changes. The great toilet paper crisis of 2020, which coincided with the COVID-19 pandemic, exposed another gap in the global supply chain. Social media exposure, the growing emphasis on customer experience, sustainability, ethical vendor conduct, and the circular economy have all had a significant influence in changing the supply chain from a back-office function to a strategic differentiator. As a result, the importance of the supply chain grew across sectors, and LogFire experienced a boom in demand that put a lot of strain on its previous design. As LogFire prioritized innovation and modernization of its supply chain offering for its customers, several constraints stood in the way:

  • They were unable to provide new features at the rate demanded by their customers.
  • They were spending a growing amount of time manually maintaining their databases to assure their clients’ 24/7 availability.
  • Most importantly, Rackspace’s production PostgreSQL databases were unable to scale up and down as quickly in response to seasonal peaks in demand as well as the yearly growth in shipment quantities of its retail customers. Despite the fact that their program could scale dynamically, scaling and upgrading the databases required downtime, making the databases inaccessible and slowing down the whole process. They had to prepare ahead of time and acquire more Rackspace servers before each seasonal peak, then manually scale down at the conclusion of the season. Scaling on AWS also necessitated downtime.

Because these fundamental difficulties were impeding its expansion, LogFire began examining other options that might meet its business objectives. After analyzing all of the benefits, they decided to move their application and databases to OCI in the summer of 2019. After a comprehensive evaluation of several options, including running PostgreSQL databases on OCI or adopting Oracle Autonomous Database, they converted all 700 PostgreSQL databases to Autonomous Transaction Processing, the Oracle Autonomous Database optimized for OLTP.

[Figure: Oracle Warehouse Management Cloud architecture on Oracle Cloud Infrastructure]

Seamless Migration Process

The conversion of LogFire’s application and databases to OCI and Autonomous Database took slightly over seven months. If not for the pauses made to minimize inconvenience for its retail clients during their seasonal peaks, the full move would have taken less than three months. They had switched their whole application stack and all of their clients onto OCI by February 2020. The following factors helped all 700 databases migrate successfully.

  1. LogFire’s application was created with a modern, cloud-native, REST services-based architecture. Because of that, moving to OCI was as simple as lifting and shifting the application code to Oracle’s cloud-native services without significant re-architecting or rewriting of the code. They used API calls to connect to a number of non-standard systems to automate package selection, packaging, sorting, and shipping. Moving their 700 PostgreSQL databases amounted to changing the core database technology without the need for additional integrations.
  2. The Django object-relational mapper (ORM) is used in LogFire’s application architecture to create a database separation layer, allowing queries to be database-agnostic. This meant that 75% of the SQL queries created by the ORM could be sent to Oracle “as is.” Only 25% of the hand-coded queries required some level of tweaking for Oracle Autonomous Transaction Processing; a rough sketch of this pattern follows this list.
  3. Because PostgreSQL and Autonomous Transaction Processing are both SQL-based, LogFire developers and users had a relatively low learning curve because ATP automates the majority of the operations.
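
As a rough sketch of the database-agnostic pattern in point 2, the configuration below shows how a Django project might switch its backend from PostgreSQL to Oracle while leaving ORM queries untouched. The model, field names, and credentials are hypothetical placeholders, not LogFire's actual schema.

# settings.py: swapping the database backend is a configuration change.
# Both engine paths are standard Django backends; credentials are placeholders.
DATABASES = {
    "default": {
        # "ENGINE": "django.db.backends.postgresql",  # before the migration
        "ENGINE": "django.db.backends.oracle",        # after the migration
        "NAME": "example_service_name",
        "USER": "app_user",
        "PASSWORD": "app_password",
    }
}

# models.py and application code: ORM queries stay the same on either backend.
from django.db import models

class Order(models.Model):  # hypothetical model for illustration
    status = models.CharField(max_length=20)
    created_at = models.DateTimeField(auto_now_add=True)

shipped_count = Order.objects.filter(status="SHIPPED").count()

Only hand-written SQL outside the ORM would need the kind of per-dialect tweaking mentioned in point 2.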

How the Move to ATP from PostgreSQL Propelled LogFire’s Growth

55% increase in order processing speed: Warehouse management systems are real-time transaction systems in which order delivery is directly affected by performance and availability. Several systems, robots, and machines make continual API calls to the database to complete a single order. As order volumes grow, database failures or slower response times have an impact on revenue and, eventually, reputation. Only Autonomous Transaction Processing could meet these high-performance standards. After switching to OCI, LogFire’s clients’ order processing rates rose by 55%, while goods shipped increased by 45% year over year.

50% increase in warehouse users: Diego stated that after migrating to OCI, LogFire was able to grow from five industries to fifteen, expanding its customer base and increasing warehouse users by 50% in just one year. Because their workers had access to all data and could focus on development rather than managing the database and supporting infrastructure, they were able to quickly introduce new features and modifications for these new sectors. Autonomous Transaction Processing allowed this expansion by enabling auto-scaling at the press of a button, even during seasonal peaks when LogFire’s customers doubled their users.

In fact, during the start of the pandemic, most of their clients’ shipment volumes exceeded their seasonal peaks. LogFire was still able to satisfy the unanticipated rise in demand efficiently and without disruption. The auto-scaling functionality of Autonomous Database can automatically scale the number of CPUs up or down based on workload needs, resulting in cost savings through true pay-per-use.

TCO Reduction: LogFire decreased its ownership costs by transitioning from a remote database management service with 10 contractors continually monitoring all of their databases to a completely automated system. Autonomous Transaction Processing automated all of its processes, allowing for error-free 24/7 monitoring and avoiding continuing maintenance costs for patching, protecting, and backing up the databases.

Zero Downtime: PostgreSQL databases hosted on AWS or Rackspace required scheduled downtime for patching, upgrading, and maintenance. Autonomous Transaction Processing grows dynamically while transactions are in flight, automatically installs updates with no downtime, and does not require planned maintenance. It generates and manages multitenant environments automatically, manages clusters, and implements Oracle Maximum Availability Architecture (MAA). The Autonomous Data Guard (Auto-DG) function, which may be activated with a single click, automatically produces a standby duplicate of the active database.

In the unlikely event of an interruption or outage, the database layer can instantly transition to the standby database. Rather than waiting for the primary database to recover, application transactions can continue to run without interruption on the promoted backup database. Transparent Application Continuity improves the availability of the database that is already running on Oracle Real Application Clusters (RAC). These features enhanced LogFire’s uptime SLA, assuring zero downtime.

Enhanced security: LogFire improved its security posture after switching away from PostgreSQL. With AWS RDS for PostgreSQL, their data was encrypted only in transit. Autonomous Transaction Processing uses Transparent Data Encryption (TDE) to encrypt data both at rest and in motion. It uses machine learning to automate the detection of security risks. It also offers more granular security to perform assessments, mask sensitive data, and audit activity in order to secure customer data. Security fixes are also applied automatically.

Machine Learning-based automation: By exploiting the in-database machine learning capabilities available with Autonomous Transaction Processing, LogFire was also able to differentiate itself and deliver unique value to its clients. They used over 150 distinct AI/ML algorithms without the help of any data scientists. The following are a few of the AI/ML use cases they implemented:

  • Predicting peak hours in distribution centers
  • Calculating average picking times and expected task time
  • Replenishment (automatically initiating efficient replenishment tasks)
  • Calculating expected profit from each order fulfilled or average revenue generated per order
  • Predicting future target values
  • Creating predictive pick orders based on historical data
  • Automatically assigning order fulfillment priority
  • Clustering employees who work well together to optimize efficiency
  • Warehouse slotting
  • Spotting worker efficiencies

Move your Oracle Database Standard Edition to Oracle Autonomous Database to Save Money and get more Benefits

Posted on May 18, 2022 (updated May 31, 2022) by Marbenz Antonio


As cloud technology advances to enable business-critical applications for enterprises of all types and sizes, the migration of data, databases, and applications from on-premises environments to the cloud is accelerating. Many of our customers, like you, have begun to make this transition and are evaluating their options for moving their on-premises Oracle Database workloads to the cloud. There are several variables to consider before starting your cloud journey.

Many organizations that run Oracle Database Standard Edition on-premises for OLTP, analytical, or mixed workloads are thinking about upgrading to a fully managed Oracle Autonomous Database in Oracle Cloud Infrastructure (OCI) or consolidating many databases in their data centers using Autonomous Database on Exadata Cloud@Customer. Let’s look at the commercial and technological benefits in this scenario.

Why Upgrade to Oracle Autonomous Database?

There are some convincing reasons to migrate your Oracle Database Standard Edition on-premises to Autonomous Database in the cloud:

  • Autonomous operations: Autonomous Database, being a fully managed service, removes the vast majority of labor-intensive manual DBA activities. It optimizes performance and availability by using machine learning and decades of production-proven best practices. Autonomous Database provides a completely self-contained experience that includes automated database tuning, scaling, patching, and security features. As a result, Autonomous Database does not require substantial database administrator and infrastructure management knowledge, lowering operating expenses and freeing up resources for more vital business goals. Autonomous features built into newer products and services, such as Oracle APEX and AutoML, improve application development cycles and reduce time-to-first-revenue.
  • Advanced features: 
    • Moving to Autonomous Database provides industry-first features designed to tackle persistent user problems that otherwise require expert-level administrators doing manual database diagnostics, performance tuning, capacity planning, manual backups, and security tasks. These and other issues are directly addressed by Autonomous Database features that are not available with Oracle Database Standard Edition. Machine learning optimizes database performance in real time, ensures patches are applied correctly and without disruption, detects and remediates unauthorized intruders as soon as they attempt to access your data, and scales performance and capacity up and down based on demand—all without disrupting operations.
    • That isn’t everything. Oracle Autonomous Database contains all of the features and capabilities of Oracle Database Enterprise Edition, including several that are not available in Standard Edition deployments: Transparent Data Encryption (TDE), multiple container databases (CDB), partitioning, compression, parallel query, in-memory columnar format, sophisticated analytics, and more. You can run Autonomous Database in Oracle Cloud Infrastructure (OCI) or, with Exadata Cloud@Customer, in the comfort and convenience of your own data center.
  • Latest infrastructure: Your on-premises hardware might be several years old, and its maintenance period or lifetime may be coming to an end. Autonomous Database avoids this problem because (1) there is no hardware investment, and you pay only for the resources that you use in a subscription service; and (2) Autonomous Database runs on Oracle Exadata infrastructure that is specifically architected for the best Oracle Database performance in the cloud. Customers with substantial Standard Edition installations and data sovereignty concerns can also migrate to Autonomous Database by running it on Exadata Cloud@Customer in their own data centers or in co-location facilities such as Equinix.
  • Higher performance: Oracle Autonomous Database has been shown in the field to improve application performance. Autonomous Database optimizes OLTP, analytics, and mixed workload performance by utilizing strong Exadata technologies like Smart Scan query offloading, Smart Flash Cache, and Automatic Indexing, which are not available in Oracle Database Standard Edition. Queries are substantially quicker when SQL queries are optimized and data-intensive and compute-intensive workloads are offloaded to intelligent Exadata storage servers. Workloads will take fewer vCPU-seconds to complete, resulting in decreased expenses.
  • Consistent data protection and security: Applying the most recent software patch is usually easier said than done, especially when you need to patch your complete stack of server, storage, and networking infrastructure in addition to Oracle Database. Because of the difficulties of evaluating the compatibility of patches from different vendors, patching self-managed databases and infrastructure is sometimes delayed by more than three months after updates are released. Delays in patching expose consumer databases to security vulnerabilities, even when a workaround is available. When new security patches become available, Autonomous Database automatically and non-disruptively deploys them throughout the database stack.
  • High availability: Autonomous Database achieves more than 99.95% availability by combining completely redundant Exadata infrastructure with Oracle Real Application Clusters (RAC), Autonomous Data Guard, automatic backups, and automated failure detection and failover in OCI.
  • Automatic scaling: Autonomous Database adjusts vCPU usage up and down depending on actual workload requirements, not on a speculative prediction of peak resources at some distant point in the future. Customers frequently over-provision both cloud and on-premises infrastructure to minimize the downtime and expense associated with upgrades. Autonomous Database in OCI and Exadata Cloud@Customer minimizes downtime by enabling online resource scaling and prevents over-provisioning by automatically optimizing both performance and cost. Typical on-premises systems do not allow online resource scaling, and most database cloud services do not either. With Autonomous Database’s per-second pricing, you only pay for the resources you use, not what you anticipate you might need in the future.

Use BYOL to Lower Your TCO

Lowering the total cost of ownership (TCO) by shifting to the cloud is vital for a CFO or CIO. Moving to the public cloud eliminates the need for enterprises to pay for hardware and data center-related expenditures such as electricity, cooling, data center networking, and physical security. By automating database operations and monitoring, migration to Autonomous Database can reduce database management expenses by up to 80%. Furthermore, automatic scaling can reduce runtime costs by charging you only for the resources that are necessary at any given time.
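
As a back-of-the-envelope illustration of the pay-for-what-you-use point, the sketch below compares a database statically provisioned for peak load with one that auto-scales. The shape sizes, utilization profile, and unit cost are made-up numbers for illustration only, not Oracle pricing.

# Hypothetical figures: not Oracle list prices.
HOURS_PER_MONTH = 730
RATE_PER_OCPU_HOUR = 1.0  # illustrative unit cost

# Static provisioning: pay for peak capacity around the clock.
peak_ocpus = 16
static_cost = peak_ocpus * HOURS_PER_MONTH * RATE_PER_OCPU_HOUR

# Auto-scaling: pay for what each part of the month actually uses.
# (hours, ocpus) pairs describing a simple utilization profile.
profile = [(600, 4), (100, 8), (30, 16)]
autoscale_cost = sum(hours * ocpus * RATE_PER_OCPU_HOUR for hours, ocpus in profile)

print(f"Static provisioning: {static_cost:,.0f} cost units / month")
print(f"Auto-scaled usage:   {autoscale_cost:,.0f} cost units / month")
print(f"Reduction:           {100 * (1 - autoscale_cost / static_cost):.0f}%")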

Even when comparing Autonomous Database to other general-purpose cloud architectures, Wikibon’s David Floyer found that migrating to Autonomous Database for transaction processing reduces total IT costs by 48% when compared to on-premises systems and reduces the expected cost of downtime by up to 70%. This is due to the design of the Autonomous Database, which provides increased speed, ultra-low latency, high degrees of consolidation, decreased downtime, and faster recovery.

When calculating TCO, keep in mind that Oracle provides Bring Your Own License (BYOL), which can significantly lower expenses. With the BYOL program, you can speed up your cloud migration by converting on-premises software licenses to similar, highly automated services in OCI. In this blog’s migration scenario, the BYOL program provides a 76% reduction off the price of Autonomous Database with License Included. You get the same 76% savings whether you bring Oracle Database Standard Edition or Oracle Database Enterprise Edition licenses to Autonomous Database, so bringing Standard Edition is very appealing.

For example, NEC Nexsolutions used this BYOL tool to migrate their retail application from on-premises Oracle Database Standard Edition to Autonomous Database. Aside from cost savings from BYOL, running Autonomous Database for a transactional processing workload on a dedicated Exadata environment in OCI resulted in improved security for NEC Nexsolutions, as well as access to autonomous functions, such as security threat detection and remediation capabilities.

Reduced Administration Efforts = Increased Opportunities and Lower TCO

Because it is a completely managed service, Autonomous Database helps enterprises save up to 80% on database administration expenses. It reduces the time IT professionals spend on operational tasks like provisioning, scaling, tuning, backups, and failover preparation, allowing them to focus on strategic business goals. Clearly, businesses must account for the impact of reduced labor costs in their TCO calculations.

Lowering Cloud Migration Risks and Costs

Any type of migration may be complicated, difficult, and expensive. Oracle provides cloud engineering services and free technical tools to simplify Oracle Database migrations from on-premises to OCI while minimizing unnecessary expenses.

With Oracle Cloud Lift Services, your IT teams can perform rapid cloud migrations with the help of Oracle Cloud professionals. These experts can advise you on Oracle migration strategy, architecture, prototyping, and management. Even better, all existing and new Oracle Cloud customers globally have access to these resources at no additional cost.

Oracle also provides two free migration technology options: Zero Downtime Migration and OCI Database Migration Service. Zero Downtime Migration is a solution that may be installed to allow a simplified and automatic migration with zero to near-zero downtime. It allows for migration between a wide range of Oracle Database sources and target configurations, such as from on-premises Oracle Database Standard Edition to Autonomous Database. OCI Database Migration Service, on the other hand, is a managed OCI service that utilizes Zero Downtime Migration. It migrates Oracle databases to OCI with little to no downtime and provides a simple user interface for a quick self-service migration experience. You can select a migration option depending on your company’s needs.

Additional Benefits that Future-Proof Your Cloud Investment

Moving your existing workload from on-premises Oracle Database Standard Edition to Oracle Autonomous Database provides you with additional benefits that will protect your cloud investment in the future.

Consider performance. Workloads on NEC Nexsolutions’ Autonomous Database ran more than five times faster than before. By utilizing Exadata’s database-optimized technology, automatic tuning, and indexing, Autonomous Database achieves low latency and high throughput. Higher performance brings multiple advantages for internal and external service consumers, including faster processing of critical business transactions, the ability to serve more customers, improved customer satisfaction through increased responsiveness, faster issue resolution, and more. As previously stated, quicker processing times also reduce cloud usage costs, since you only pay for resources consumed.

Autonomous Database increases the efficiency of your developers as they upgrade and grow your organization’s business applications. Autonomous Database, being a converged database that supports transactional, analytic, and data warehousing workloads, enables easier and faster software development and provides a wide range of features. In-database machine learning (ML) algorithms, support for a wide range of data formats (e.g., JSON, XML, relational, geographic, graph, IoT, text, and blockchain), and REST APIs enable development teams to build sophisticated applications without the need to link different services. Oracle APEX Application Development is also available from Autonomous Database for no-code/low-code application development.

Oracle Cloud Native Services, such as Container Engine for Kubernetes, allow developers to easily create and expand advanced applications by using a microservices design that incorporates database capabilities. Although some of the listed features are also accessible with Oracle Database Standard Edition, having access to a wide range of OCI services allows developers to focus on innovation rather than development tools and manual service integration.


5 IMPORTANT FACTORS FOR SIZING YOUR ORACLE DATA SCIENCE ENVIRONMENT

Posted on May 12, 2022 (updated May 31, 2022) by Marbenz Antonio


Most data science experiments begin with a few data files on a laptop and then use algorithms and models to predict an outcome. Although the method requires little initial computing, data scientists rapidly learn that their workloads demand more computational power than a local CPU or GPU can provide. As a result, they need a rapidly scalable computing solution. The Oracle Cloud Infrastructure (OCI) Data Science platform fits into this category, addressing the need for flexible sizing and performance.

When a data scientist considers employing cloud resources, the following questions can be asked:

  • Is cloud computing capable of scaling up to meet the workload?
  • Can I achieve performance comparable to or better than my laptop or on-premises machines?
  • Is this a cost-effective move?

This article walks through the major criteria that drive the answers to these questions, which data scientists can use as a guideline to prepare for and get the most out of the OCI platform.

Workload

The first stage in the process is to understand your workload and its peculiarities. A data science program’s workload is defined as the quantity of computing work performed on a given computational resource, such as a laptop, an on-premises server, or a cluster of workstations. It covers memory, I/O, and CPU- or GPU-intensive tasks. The workload is calculated by multiplying the number of CPU or GPU cores by the number of computation hours.
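
As a small worked example of the cores-times-hours calculation, the sketch below estimates a monthly workload from a handful of hypothetical jobs; the job names and figures are assumptions for illustration only.

# Each entry: (job name, CPU or GPU cores used, hours of computation).
jobs = [
    ("feature_engineering", 8, 6),
    ("model_training", 16, 12),
    ("batch_scoring", 4, 3),
]

# Workload = cores x computation hours, summed over the jobs run in a month.
total_core_hours = sum(cores * hours for _, cores, hours in jobs)
longest_job_hours = max(hours for _, _, hours in jobs)

print(f"Total workload: {total_core_hours} core-hours")   # 8*6 + 16*12 + 4*3 = 252
print(f"Longest-running job: {longest_job_hours} hours")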

As a result, you must be aware of the following critical elements that influence workloads:

  • Maximum compute hours: the duration of your longest-running job.
  • Peak utilization: your longest-running task at 80–90% CPU or GPU usage.
  • Average CPU or GPU utilization: what proportion of the CPU or GPU is used on average.
  • Average workload hours per day or month: the percentage of time your machine is active rather than idle.

The first two criteria identify the type of your most demanding task and provide an estimate of the number of computing cores and storage space needed to meet your performance goals. The remaining two elements are concerned with resource idle time and the cost implications for your performance target.

If the maximum compute hours exceed what your performance requirements allow, you’ll need extra compute cores, and running workloads in OCI for higher performance may be beneficial. If your typical monthly workload hours are low and your machines are idle most of the time, OCI Data Science’s on-demand computing infrastructure can help you avoid paying for idle capacity.

Installation, configuration, data intake, and preparation may all be factored into your workload. These manual chores are required in all data science initiatives. These activities usually don’t require much computing power and may be completed with a small amount of CPU time. You may, for example, install Anaconda and configure it on a low-end Compute shape and then transfer it to a bigger core CPU or GPU as you run your algorithms and iterations.

Installing software many times, managing configurations, and dealing with version and library incompatibilities all take time away from productive work. Using a pre-built environment that functions consistently in a team context reduces this time to a fraction of what it would be otherwise.

CPU and GPU

The performance of your Data Science application is influenced by the CPU or GPU, so estimating the appropriate compute needs requires a grasp of a few important factors.

Cores, threads, and clock speed are the most important characteristics of a CPU. Cores are the individual processors packed within the CPU chip. Threads define how many tasks a CPU core can perform at the same time, also known as simultaneous multithreading or hyperthreading; the number of threads is usually double the number of cores.

A processor’s clock speed in GHz indicates how many cycles it can execute per second. When two processors have the same number of cores, the one with the higher clock speed performs better. A 10-core Intel i9-10900 can therefore be comparable to a 16-core AMD Ryzen 9 5950X when the former runs at a higher clock speed.

Stream processors, CUDA or tensor cores, GPU vRAM, and memory bandwidth are the key features of a GPU. AMD’s stream processors and NVIDIA’s CUDA cores each perform a single calculation per clock cycle, whereas tensor cores compute a whole matrix operation per GPU clock cycle and are 5–8 times faster. The amount of vRAM built into the GPU card and its memory bandwidth determine how fast the GPU can access and use that memory. Running the nvidia-smi tool on an NVIDIA GPU shows real-time GPU and vRAM usage while your task is running.
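
For example, a short script along the lines of the sketch below could poll nvidia-smi while a training job runs. It assumes an NVIDIA GPU with the driver tools on the PATH, and the query fields available can vary by driver version.

import subprocess
import time

# Poll GPU utilization and memory a few times; requires nvidia-smi on the PATH.
for _ in range(5):
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())  # e.g. "87 %, 10240 MiB, 16384 MiB"
    time.sleep(10)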

As a result, you may explore and decide the best CPU or GPU processor shape for your task. For example, MLPerf is an open-source measuring tool that compares CPU and GPU performance for both machine learning training and inference workloads and tools. Benchmarks for deep learning on the CPU and NVIDIA GPU can also give useful information.

Once you’ve determined the best processor for your job, matching that setup with OCI Compute shapes gives you a good prediction of performance on an OCI Data Science shape. However, because workload characteristics vary greatly, evaluating your own workloads is the most reliable way to determine what works best for you.

Environment

Your Data Science environment also has a big influence on Compute shape size and achieving the desired price-performance ratio. A high-capacity CPU or GPU isn’t necessary for installing or establishing an environment, or when performing data analysis or preparation tasks. Similarly, if you’re just getting started with algorithm testing or model training, a few CPU or GPU cores may be plenty.

As you increase your training workload, you’ll need more cores and memory. The Compute shape is also influenced by how a trained model performs inference on production data. Predictions on big batches of data and high-frequency concurrent queries, for example, necessitate scaling the CPU and GPU in terms of cores and nodes. Latency is a critical aspect of inferencing, and adjusting the number of cores or nodes in a Compute shape is essential to strike the right balance.

Data science workloads are frequently run not just on various Compute shapes, but also in various Conda environments. PyTorch, TensorFlow, or CUDA workloads may require GPU-based environments, whereas ordinary pandas, sklearn, or Oracle Accelerated Data Science environments can handle data exploration and preparation.

As a result, dividing environments according to the type of work being done can be beneficial. Working with distinct Conda environments for each job is part of this approach. You need the ability to switch between environments quickly and scale up compute cores without compromising your installed environment base. To meet price-performance and migration expectations, you’ll need to strike the right balance between an on-premises environment and an on-demand OCI environment.

Existing siloed data science environments, for example, may be fine at first, but as a data science team grows, a consistent environment management approach and collaborative code sharing become essential. Consider the implications and critical elements, such as performance, growth, collaboration, and cost, when assessing the total infrastructure.

Data residency and volume

In order to make data appropriate for data science processing, about 70% of each data science effort is spent on data input, cleaning, and preparation. When huge batches or streams of data need to be handled, this workload might grow exponentially, necessitating the use of a bigger, auto-scalable compute resource. When the essential data is spread across on-premises and several clouds, this procedure becomes much more difficult. Consolidation, deduplication, and consistency of data become significant challenges.

In such cases, copying the essential data to a common platform, such as a data lake, which you can then process with data transformation tools located closer to the data, is often the simplest and most cost-effective solution. This architecture necessitates the adoption of an auto-scalable storage solution, and on-demand solutions such as OCI Lakehouse can help.

Cleaning and preparation methods differ between structured and unstructured information, as well as between smaller batches and streaming datasets. For example, SQL-based cleaning operations inside an OCI autonomous database or Apache Spark-based in-memory batch processes may successfully clean huge relational structured data. Because a bigger data collection necessitates greater storage, access, and data processing time, lakehouse sizing becomes important.

Moving data science models closer to data, on the other hand, is a critical component that allows a data scientist to focus on the most important activities, such as model development, selection, and prediction explainability, rather than transferring enormous amounts of data. When data is spread out, data scientists may run workloads in silos; when data is consolidated, they can run workloads together on a single platform like OCI. The principles are the same in both approaches, but the latter is easier to administer and apply. You can effectively scale computing and storage for data science outcomes in these circumstances.

Architecture and Integration

The architecture of your data science tools and components relates to how they integrate and interact with one another. If your data science infrastructure is coupled to on-premises solutions or a particular cloud vendor, evaluate the various tools, their production usage, and their integration points to determine how they can be deployed or grown with a data science implementation. The less critical and fewer the integration points, the easier it will be to migrate, scale, or burst into the cloud.

If your environment is imaged and your infrastructure is scripted, as with Terraform, deploying on OCI and taking advantage of the performance, flexibility, and cost savings it provides can be more beneficial. In other cases, moving large, infrequent training workloads to the cloud while keeping production inference workloads on-premises can improve overall performance and save money. This hybrid cross-cloud architecture with right-sized Compute shapes can have a significant influence on overall company performance and profitability if properly automated and reported.

Conclusion

You can examine your existing or anticipated Data Science environment by evaluating all of these aspects and deciding whether transferring your data science workloads entirely or partially to OCI makes sense for your workload. When you have a concept, use the Oracle Cloud Estimator to see how your analysis compares to a small, medium, or large implementation. After that, explore the relevant reference architectures and the OCI Data Science service.

You may also tailor and right-size your Data Science environment to meet your specific requirements. Oracle Cloud Infrastructure Data Science is dedicated to assisting you in making the best decision possible.


IN ORACLE CLOUD INFRASTRUCTURE, HOW DO YOU ENABLE AUTONOMOUS LINUX INSTANCES FOR OS MANAGEMENT?

Posted on May 12, 2022 (updated May 31, 2022) by Marbenz Antonio


Autonomous Linux, which is based on Oracle Linux, is the first and only autonomous operating environment. It keeps the operating system patched automatically, reducing complexity and human error while also increasing security and availability. Oracle OS Management Service in Oracle Cloud Infrastructure (OCI) now manages Autonomous Linux instances by default. You can easily transition legacy Autonomous Linux instances that were installed using the July 2021 or earlier images to make use of the functionality provided by the OS Management Service integration.

Automatic discovery and OS management

Before creating an instance in OCI, you must first fulfill the prerequisites and set up the OCI policies to allow management. The Autonomous Linux image is available as an OS platform image in OCI and can be installed in a matter of seconds. OS Management Service automatically discovers and manages Autonomous Linux instances deployed from current platform images in OCI. Through the Oracle Cloud Console, you can establish the daily auto-update schedule and monitor important system events and resources using the Autonomous Linux integration with OS Management Service. You can manage Autonomous Linux instances alongside Oracle Linux and Windows Server instances in OS Management Service using a single user interface. Customers of OCI can use the OS Management and Autonomous Linux services for free.

[Figure: the Autonomous Linux dashboard in the OS Management service]

Migrating legacy Autonomous Linux instances

By default, Oracle Autonomous Linux instances using the August 2021 Oracle-Autonomous-Linux-7.9-2021.08-0 platform image or later are connected to OS Management Service. The alx-migrate script can be used to migrate instances installed with an Oracle Autonomous Linux July 2021 Oracle-Autonomous-Linux-7.9-2021.07-0 image or earlier to integrate with OS Management Service. When you use the alx-migrate script to migrate Oracle Cloud Marketplace images, such as the legacy Autonomous Linux version of Oracle Linux KVM, you can make use of the newest OS Management capabilities through the Console.

The alx-migrate tool cannot be used to upgrade autonomous Linux instances installed on Oracle Cloud Free Tier Compute resources to interface with the OS Management service. OS Management Service does not support instances placed on Free Tier computing.

The steps below show how to transition existing legacy Autonomous Linux instances to OS Management in OCI. The documentation contains detailed instructions.

1. For Autonomous Linux, configure the needed OCI Identity and Access Management (IAM) rules. Create a dynamic group for the OS Management service that defines the group members, and add a rule for the dynamic group that defines the set of instances allowed in the policy. In either your tenancy or compartment, make sure you’ve specified the relevant Identity and Access Management (IAM) policies for Autonomous Linux.

2. On the instance, make sure the OS Management Service Agent and Oracle Autonomous Linux plugins are turned on. Enable these plugins if they are currently disabled.

3. Install alx-migrate:

$ sudo yum install alx-migrate

4. Run alx-migrate:

$ sudo alx-migrate

5. Accept the OS Management Service terms of service. Run the alx-migrate script with the --accept-terms or -a option to accept the terms automatically.

$ sudo alx-migrate --accept-terms

Check to see if the migration was successful. To assist you in resolving probable issues, the documentation includes a message table.

The alx-migrate program removes the al-config RPM package after a successful migration because it is no longer required. Instead of using the al-config application, the Autonomous Linux service controls autonomous settings through the Console after migration. See Managing Autonomous Linux Settings for further details.


THE NECESSITY OF PROVIDING UNIQUE SERVICE EXPERIENCE WITH DIGITAL-FIRST CUSTOMER SERVICE

Posted on May 12, 2022 (updated May 31, 2022) by Marbenz Antonio


Customer service has evolved into much more than just addressing complaints and resolving difficulties throughout a customer’s end-to-end experience. From the initial inquiry to product purchase, usage, and support, service is now a vital aspect of the whole customer experience (CX).

Consumers’ expectations of the brands they select are changing as well. According to a Five9 poll, 83% of decision-makers felt that the customer service experience is critical for client retention, and nearly all (99%) stated that customer service is crucial and necessary for their organizations.

In this blog on digital-first service, we’ll look at how you can keep up with evolving consumer expectations by providing truly distinctive customer service experiences.

A successful digital-first service strategy requires unique service experiences

Given the growing importance and extent of service, a customer’s connection with the service department will most certainly be the most common—and important—brand engagement. As a result, providing unique and personalized digital-first service experiences to customers is becoming a necessity—one that is closer than you would think.

Customer care that is one-of-a-kind does not mean providing concierge-level assistance to every customer or finding fresh solutions to every problem. It all boils down to using data to have a deeper understanding of your clients.

Every business has access to a variety of information gleaned from sales, service, marketing, supply chain, and other departments. Break down your data silos and use that information to better understand your consumers’ requirements, interests, historical trouble areas, conversational styles, and special characteristics. Then, build and organize service processes that enable unique experiences and will eventually result in happy and loyal customers.

Tracking online browsing data, for example, can be used to improve a potential customer’s learning and inquiry experience by leading them back to their chosen homepage when they return days later. When a consumer logs in to your site after delivery, track the status of their order and proactively ask whether the shipment arrived as expected. Alternatively, segment consumers to provide personalized service experiences depending on the current context, such as an impending winter storm or their latest payment plan setup.

Three ways you can facilitate unique service experiences with Oracle Service

Oracle Service helps your company to provide exceptional customer service. Oracle Service can help you with capabilities like Agent Insights, Digital Assistant, and a direct Oracle Unity integration.

1. Empower your employees with a single, dynamic view of the customer

Our modern agent portal provides real-time client intelligence to agents, allowing them to provide tailored support. They may use AI to identify the best next steps and create a personalized experience for each consumer.

2. Deliver persistent customer intent

Use historical data to forecast what a client will try to do during their next engagement with your business, and then spread that knowledge across all channels (digital self-service or assisted service) to make the process as simple as possible. If your hotel guest is due to check out today, for example, set all service channels to bypass the regular service flow and instead ask about their interest in a late check-out.

3. Personalize interactions with modern assisted-service channels

Give customers access to digital channels they anticipate, such as live chat, social messaging, and SMS. Offer high-touch visual interaction alternatives like video chat and screen co-browsing for even more customized service.

By providing unique customer service, organizations can stay competitive and satisfy contemporary consumer expectations with digital-first service experiences. Check out the previous posts in our blog series to discover more about Oracle’s current vision for delivering holistic brand experiences with service, and stay tuned for the next topic on increasing your CX with hyper-convenient service.


Everything You Need to Know About Data Science Trials

Posted on March 22, 2022 (updated June 1, 2022) by Marbenz Antonio

When productivity and cooperation are strained, machine learning models can’t be audited or replicated, and models aren’t making it into production, it’s time to switch to a data science platform. With enterprises seeking to integrate diverse data sources and applications, data integration has become increasingly difficult.

If this describes your company, it’s time to try out a data science platform. While determining the ideal solution might be time-consuming, we’ve compiled a list of must-have components for a successful machine learning experiment.

In summary, your objective should be to identify a data science platform that solves the difficulties you face daily as a data scientist so that you can effectively drive business outcomes. This involves searching for a platform that provides a set of tools to help you do your job faster while also allowing your work to be shared, audited, replicated, and scaled.

Know Which Data Science Problems You’re Trying to Solve

As you are aware, the nature of a data scientist’s work requires very little computing on some days and a great deal on others. This fluctuating workload can be difficult for IT, which may also have to deal with the pressures you place on databases or your requirements for increased security levels when you operate in production environments. A solid data science platform may help data scientists and their teams reduce their reliance on IT while increasing their productivity and efficiency.

Other data-science-related issues include:

  • The provenance of data and models
  • Keeping track of code versions
  • Sharing notebooks
  • Using pipelined processes to speed up workflow
  • Replicating and auditing models once they are in production
  • Storing and moving large volumes of data
  • Separating model deployment from engineering so data scientists can own models from start to finish

Keep in mind that once you’ve decided on a data science platform, you’ll need to present your findings to IT. When you do, emphasize that you’ll be able to operate more effectively without increasing expenses, compromising security, or demanding round-the-clock assistance.

What Should You Evaluate in a Machine Learning Trial?

Instead of spending hours on the phone with customer support reps, seek free or low-cost machine learning trials that give you at least a month to check out multiple services. Some trials include real-time coaching, but opt for ones that are automated and straightforward to use—you’ll have plenty of time to speak with a provider when you’re ready to move forward.

The following is a checklist of critical elements to consider while conducting a data science trial:

Data Science Service Set-up

One of the first things you’ll want to do is set up your primary work environment and assess your available resources. Keep an eye out for:

  • A data catalog service that uses a structured inventory of data assets to discover and regulate data.
  • A few example notebooks or tutorials that get you up to speed on the tools quickly, with examples relevant to your workflow.
  • The flexibility to use numerous tools and libraries seamlessly, and to share notebooks with coworkers, to improve productivity.

Running Big Data Applications

Running Spark on-premises can be difficult for data scientists since the systems are designed for production workloads rather than the bursty, ad hoc workloads that data scientists generate. This is one of the most compelling reasons to use a cloud-based data science platform. Make sure the big data capabilities included in your data science trial:

  • Are well-suited for use in a laptop environment.
  • Offer both batch and ad hoc processing.
  • Provide centralized application control and visibility.

For example, Oracle Cloud Infrastructure Data Flow, part of Oracle’s data science platform, supports Spark MLlib, allowing you to create models using industry-standard methods. It’s serverless, which means data scientists can quickly provision only the resources they need to run a job and then tear down the cluster. The goal of a data scientist’s job is to put business insights and machine learning models into production; patching, updating, and managing clusters adds no value. The serverless approach relieves you of that load, allowing you to concentrate on areas where you can provide real value to the company.

Cloud Analytics & Autonomous Databases

Strong cloud analytics and access to self-contained datasets are signs of a mature data science platform. Make a point of looking for:

  • The ability to easily create a temporary database.
  • The ability to create models by applying computation to the data.
  • Analytics solutions that can deal with data from different sources in a transparent manner.
  • Scale-out processing that minimizes data transfer.
  • Databases with built-in machine learning tools.

We suggest connecting to Oracle Autonomous Database and experimenting with its data visualization capabilities from Oracle’s data science platform. To verify how simple data migration is, try setting up Oracle Autonomous Data Warehouse and using the sample data in the SH schema, or loading your own data. Finally, try Oracle Machine Learning to discover how easily you can train, test, and tune machine learning models from data science notebooks while the database handles the heavy lifting.
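
As a rough illustration of that workflow, the sketch below pulls a sample of the SH schema into a notebook using the open-source python-oracledb driver and pandas. The credentials, DSN alias, and row limit are placeholders, and it assumes the database wallet or connection configuration is already in place.

    # Sketch: sampling the SH.SALES table from Autonomous Data Warehouse for exploration.
    import oracledb
    import pandas as pd

    conn = oracledb.connect(
        user="analytics_user",      # placeholder credentials
        password="your-password",
        dsn="mydb_low",             # TNS alias from the ADW wallet/config (placeholder)
    )

    # Fetch a small slice of the sample data into a DataFrame.
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM sh.sales FETCH FIRST 1000 ROWS ONLY")
        columns = [d[0] for d in cur.description]
        sales = pd.DataFrame(cur.fetchall(), columns=columns)

    print(sales.describe())
    conn.close()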

Block Storage and Data Integration

Ensure that your data science platform solution provides unlimited storage at a low cost, as well as easy integration across databases and other data sources. Add the following to your to-do list:

  • A platform where you don’t have to worry about provisioning and maintaining your own infrastructure.
  • Payment options that keep costs low by charging only when infrastructure resources are used.
  • A solid data integration to block-storage pipeline; the speed and ease of use of the solutions’ extract, transform, and load (ETL) functions will give you a decent indication of how well they integrate.
  • The ability to replicate vast amounts of data and then discard it.

Data Catalog

The data catalog in your data science platform is critical for discovering, organizing, enhancing, and tracing data assets. During your trials, look for the following features:

  • Self-service tools that help you find and manage data throughout your organization.
  • Transparency and traceability that support governance and auditability by letting you know where data comes from.
  • Automation of data management processes to help you increase productivity at scale.

Innovative New Data Science Tools

Each data science platform will have cutting-edge technologies you may not be aware of. Keep track of which solutions provide the kinds of advances that best suit your requirements and budget. These tools should help you improve your workflow by speeding up repetitive tasks and giving you the chance to add more value to the company.

Key Notebooks to Test Oracle’s Accelerated Data Science SDK

The Oracle Accelerated Data Science (ADS) SDK is one of the platform’s standout features. ADS is a native Python library included in the Oracle Cloud Infrastructure Data Science service that covers the whole predictive machine learning model lifecycle: data gathering, visualization, profiling, automated data transformation, feature engineering, model training, model evaluation, model explanation, and saving of the model artifact itself.

Once you have your model, you can use ADS to apply machine learning explainability. The explanations are agnostic to model structure and give you insight into how the model works, so you can trust that it has learned the right things and check the model for bias. After you’ve done that, you can be far more confident that it will perform well in production.

When trying out ADS, we strongly advise you to try the following notebooks:

1. Working with an ADSDataset Object (adsdataset_working_with.ipynb): The data itself is one of the most critical aspects of any data science effort. This notebook shows you how to use the ADSDataset class. An ADSDataset is similar to a data frame, but it comes with a lot of extra capabilities that help you optimize your workflow.

Why it is important: Having a powerful way of representing your data in the notebook will improve your productivity. ADSDataset lets the data scientist work with data that is larger than what fits into memory but manipulate it as if it were all in memory. It also has features that link the data to the type of problem you are working on: it lets you define the dependent (target) variable so the ADS model understands it, and it helps you explore the data.

2. Introduction to Loading Data with the Dataset Factory (datasetfactory_loading_data.ipynb): This notebook explains how to read data from a variety of common formats using ADSDataset. There’s no need to learn a new package for each data source or format because the DatasetFactory.open() function takes care of everything.

Why it is important: Loading data is the first step in almost every project. Because DatasetFactory.open() handles many sources and formats through a single entry point, you spend less time wrangling readers and more time on the analysis itself.
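
For context, a minimal sketch of the pattern this notebook walks through is shown below. The file path and target column are placeholders, and the import path follows the ADS documentation but may differ between ADS versions.

    # Sketch: open a CSV with DatasetFactory and declare the target up front.
    from ads.dataset.factory import DatasetFactory

    # The resulting ADSDataset knows what kind of problem it supports.
    ds = DatasetFactory.open("attrition.csv", target="Attrition")

    # Summary statistics and visualizations without leaving the notebook.
    ds.show_in_notebook()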

3. Introduction to Dataset Factory Transformations (transforming_data.ipynb): It is critical to recognize and correct data quality issues to get the maximum performance out of your model. Different transformations should be employed depending on the type of model being used. This notebook demonstrates how ADS can assist you with this.

Why it is important: Cleaning up data quality issues takes up a large share of a data scientist’s time. The DatasetFactory class makes it simple to locate and resolve these issues, and it offers an automated workflow for the data scientist to follow.
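
A rough sketch of that workflow, with the caveat that the file path and target column are placeholders and that ADS method names can differ between versions, might look like this:

    # Sketch: let ADS suggest and apply fixes for common data quality issues.
    from ads.dataset.factory import DatasetFactory

    ds = DatasetFactory.open("attrition.csv", target="Attrition")

    # Review the recommended fixes, then apply them automatically.
    print(ds.suggest_recommendations())
    clean_ds = ds.auto_transform()

    # Split for modeling once the data is in shape.
    train, test = clean_ds.train_test_split(test_size=0.2)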

4. Classification for Predicting Census Income with ADS (classification_adult.ipynb): In this notebook you use the OracleAutoMLProvider tool to create a classifier for the public Census Income dataset. This is a binary classification task; the dataset is available at https://archive.ics.uci.edu/ml/datasets/Adult for additional information. You can investigate the Oracle AutoML tool’s numerous settings, which let you control the AutoML training process, and finally compare and contrast the various Oracle AutoML-trained models.

Why it is important: The ADS SDK includes a set of strong tools that are based on open-source libraries. This notebook shows how to use AutoML to create high-quality models in a real setting.
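
As a sketch of the flow this notebook demonstrates: the import paths and arguments below follow the ADS documentation pattern but may vary by version, and the data path is a placeholder.

    # Sketch: train a classifier on the Adult / Census Income data with Oracle AutoML.
    from ads.dataset.factory import DatasetFactory
    from ads.automl.driver import AutoML
    from ads.automl.provider import OracleAutoMLProvider

    # Load the data and split it (path is a placeholder).
    ds = DatasetFactory.open("adult.csv", target="income")
    train, test = ds.train_test_split(test_size=0.2)

    # AutoML selects the algorithm, features, and hyperparameters,
    # returning a tuned model plus a simple baseline for comparison.
    automl = AutoML(train, provider=OracleAutoMLProvider())
    model, baseline = automl.train()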

5. Introduction to Model Evaluation with ADSEvaluator (model_evaluation.ipynb): 

You can explore the capabilities of ADSEvaluator, the ML evaluation component of the Accelerated Data Science (ADS) SDK, in this notebook demo. You’ll learn how to apply it to evaluate a broad range of supervised machine learning models, as well as to compare models within the same class.

This notebook covers binary classification with an imbalanced data set, multi-class classification using a synthetically created data set of three evenly distributed classes, and finally a regression problem. Open-source libraries are used to train the models, which are then assessed using ADSEvaluator. It highlights how ADSEvaluator can improve the tools you already use.

Why it is important: The process of evaluating models is quite conventional. ADSEvaluator shortens it by deciding which metrics to examine and then calculating them for you.
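
Continuing the AutoML sketch above (so `train`, `test`, `model`, and `baseline` come from that example), and with the same caveat that class paths and arguments may vary by ADS version, the evaluation step might look like this:

    # Sketch: compare the AutoML model with the baseline on held-out data.
    from ads.evaluations.evaluator import ADSEvaluator

    # ADSEvaluator picks metrics appropriate to the problem type and renders them.
    evaluator = ADSEvaluator(test, models=[model, baseline], training_data=train)
    evaluator.show_in_notebook()
    print(evaluator.metrics)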

6. Model Explanations for a Regression Use Case (mlx_regression_housing.ipynb):

You will use this notebook to do an exploratory data analysis (EDA) to better understand the Boston housing dataset, a regression dataset that provides information about houses in various areas and suburbs of Boston, Massachusetts. The target variable is a continuous value that indicates a house’s monetary value.

You’ll train a model to forecast home prices, then assess how well it generalizes. When you’re happy with the model, you can investigate how it works using model-agnostic explanation approaches. You’ll learn how to create global explanations (to help you understand the model’s overall behavior) as well as local explanations (to understand why the model made a specific prediction).

Why it is important: Understanding what a black box model is doing can be difficult. It’s also crucial to check for bias and ensure that the model has learned the proper information. The data scientist can achieve this with the use of machine learning explainability (MLX).
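
The notebook itself uses ADS MLX, but the underlying idea is library-agnostic. As a generic illustration (using scikit-learn and its bundled California housing data rather than the Boston set), permutation importance gives a global explanation of a trained regressor by measuring how much shuffling each feature hurts the model:

    # Generic global-explanation sketch with scikit-learn (not Oracle's MLX).
    from sklearn.datasets import fetch_california_housing
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = fetch_california_housing(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

    # Which features matter most to the trained model, regardless of its internals?
    result = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)
    for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
        print(f"{name}: {score:.3f}")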

Posted in Oracle | Tagged: Oracle, Oracle for Business

6 Ways to Make AI Adoption Easier

Posted on March 22, 2022June 1, 2022 by Marbenz Antonio

AI will become more industry-specific

According to Elad Ziklik, vice president of product management for AI services and data science, most firms aren’t in the business of building AI models. Many firms want a Lego kit: a comprehensive set of all the tools and instructions they need for a given use case, rather than a disjointed collection of building pieces with no clear instructions.

As a result, AI will increasingly gravitate toward domain-specific models in areas such as finance and manufacturing. However, these models need a significant amount of data as well as subject expertise, both of which Oracle possesses.

“Oracle is the only software firm that has a general-purpose cloud platform as well as being a world leader in SaaS apps and domain-specific apps,” he added.

With that, he announced Oracle Cloud Infrastructure (OCI) AI Services, which allow businesses to apply pre-trained AI models to their operations without needing in-house AI or machine learning specialists.

Prioritize identifying the business issue

Many presenters at Developer Live agreed that defining the business problem first is critical for your team to discover the proper data and models for your use case.

Suhas Uliyar, vice president of product management, explained, “It starts with the defining of the problem and then obtaining the correct data.”

“Start thinking about what issues you might want to tackle in the future, and how you might start gathering and classifying data today to prepare for them,” said Ian Wilson, IntelligentSuite’s director of engineering.

It is important to prepare the data

The machine learning lifecycle requires more than just building models. According to an IDC and Oracle analysis, over half of the time spent on AI initiatives goes to integrating and maintaining data rather than doing data science tasks.

“While there is a lot of attention today on algorithms, the many models that are accessible, and the services that are available, I feel that data is a new programming language – that is, you begin to manage the AI model by managing its training data,” Uliyar stated.

To make the greatest use of your data, you must first understand what you have. Abhiram Gujjewar, director of product management, discussed OCI Data Catalog, a metadata management service that makes it easier to find useful, trustworthy data across the company.

Data science teams must also prepare and process data before it can be used in modeling. Carter Shanklin, senior director of product management, and Julien Testut, senior principal product manager, highlighted OCI Data Integration, a fully managed serverless ETL service, and OCI Data Flow, which helps customers deliver Apache Spark-based applications faster, as two solutions for integrating and preparing data for data science more easily.

AutoML speeds up data science work, but it won’t replace people

“While a major portion of the machine learning process can be automated,” said Mark Hornick, senior director of data science and machine learning product management, “there are restrictions we face today.”

AutoML, for example, removes repetitive activities in model development, allowing teams to increase productivity while lowering computation costs. Framing the business problem and interpreting the data, on the other hand, need a human viewpoint; people bring domain-specific expertise to these phases and define the success criteria, according to Hornick.

To speed up model training, OCI Data Science includes its AutoML engine as part of the Accelerated Data Science (ADS) toolbox.

“With this AutoML engine, you can effectively take a dataset and spit out a really solid candidate for a model for that data in a very short amount of time with very little work,” Elena Sunshine, director of product management, explained.

AutoML’s job is to complement, not replace, the work of data scientists.

Don’t overlook machine learning deployment

The process isn’t complete once you’ve developed the model; you still need to deploy it and monitor it while it’s in use. Machine learning models are arithmetic functions, according to Marcos Aranciba, product manager for data science and big data, and “model deployment is transforming that math formula into a result.”

He noted several important hurdles for machine learning deployment, including developing a model in a different environment from the one where it will be deployed, and integrating models into end-user applications. His talk goes deeper into the obstacles of machine learning deployment and how Oracle Machine Learning tackles them.

Data science teams include more than just data scientists

Last but not least, AI isn’t only for data scientists. Anyone who uses AI, from engineers to corporate users, is welcome to participate.

Developers should learn about machine learning principles and methods, according to Hornick. “This will better prepare you to collaborate with your organization’s data science staff,” he stated.

The OCI AI Services, according to Ziklik’s presentation, seek to “empower data science teams to efficiently engage with developers, data engineers, and operators, and provide AI-powered solutions” by delivering pre-trained AI models.

Posted in Oracle | Tagged: Oracle, Oracle for Business

Creating a more Secure Connection through Automation

Posted on March 22, 2022June 1, 2022 by Marbenz Antonio

Automation, Automation, Automation

Secure Transport Layer Security (TLS) connections, such as those used by an e-commerce website, depend on automation. An expired certificate is one of the most common causes of TLS connection failure, so automating the renewal and distribution of certificates is a great practice for keeping your application from being interrupted. Oracle Cloud Infrastructure Certificates (OCI Certificates) is a new cloud X.509 certificate service meant to help with certificate administration for TLS connections. With the OCI Certificates service you can create private Certificate Authority (CA) hierarchies and TLS certificates, and you can build as many CA branches as you need, up to 10 layers deep.

Automated early renewal will not only provide you with a 30-day buffer in case of an issue, but it will also limit the attack vectors available if your private key is ever compromised. If your private key is compromised, you must revoke the certificate, which puts it on the Certificate Revocation List (CRL). When a client downloads a certificate, it consults the CRL to see whether the certificate is still valid. The CRL, however, has several drawbacks. Shortening the validity periods of your certificates will not fix the problem entirely, but it will help reduce your window of exposure if you are ever compromised.
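
To see why automation matters here, a small, generic Python check like the one below reports how many days remain before a server’s certificate expires. The hostname is a placeholder; a renewal pipeline or monitor would run something similar on a schedule.

    # Check how many days remain before a server's TLS certificate expires.
    import socket
    import ssl
    import time

    host = "example.com"   # placeholder hostname
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()

    # 'notAfter' holds the expiry timestamp, e.g. 'Jun  1 12:00:00 2025 GMT'.
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    days_left = int((expires - time.time()) // 86400)
    print(f"{host}: certificate expires in {days_left} days")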

OCI Certificates requires a Hardware Security Module (HSM) key to create a CA; you can get up to 20 HSM keys for free with OCI Vault. If your CA’s private key is compromised, all subordinate CAs and certificates must be revoked and replaced. OCI Certificates mitigates this risk by storing the private key in a single location, the HSM, and restricting access to it. Use a different HSM key for each CA; this benefit is lost if you use the same key for several CAs.

The CRL can be managed automatically for CAs created under OCI Certificates. When you create an OCI Object Storage bucket for it, the list is generated automatically, and when a certificate or CA is revoked, it is added to the CRL immediately.

Managing your Certificates

With the new OCI Certificates service, you can manage your certificates in three ways. The first approach, issued by an internal CA, is a completely automated path: your private CA generates the certificate and seamlessly deploys it to integrated services like OCI Load Balancer. In this model, your certificates are monitored and automatically renewed and redeployed.

The second approach, issued by an internal CA but managed externally, is for when you have a policy that requires the private key to be stored on-premises. In this case, you generate a Certificate Signing Request (CSR) and upload it to the certificate service, which allows your CA to generate the certificate.

Finally, if your certificates come from an external vendor, use imported certificates. As with the externally managed approach, after the certificate has been uploaded it is automatically deployed to the load balancer, and you are notified when the certificate needs to be renewed.

OCI Certificates can keep up to 10 versions of each certificate. The stage column in the versions list quickly reveals which certificate is currently in use: the active certificate is in the current stage, past versions are in the previous stage, and if you manually renewed a certificate but did not have it automatically deployed to a resource, that version will be in the pending stage.

Associations are another tool that helps you keep track of which certificate is installed on which OCI resource. A certificate’s associations show you which resources are using it, and you cannot remove a certificate as long as an association exists for it. This reduces the risk of human error when maintaining certificates.

In addition to the console, there is a robust API for automating your use cases. You can either automate the process for non-integrated services or download the certificate for on-premises use. If you use a third-party management provider, you can upload your certificates and assign them to your resources. Whatever your use case, OCI Certificates lets you manage your CAs and certificates in the cloud.
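
As a sketch of what scripting against that API could look like with the OCI Python SDK, the snippet below lists certificates in a compartment. The client and method names follow the SDK’s usual naming for this service but should be treated as assumptions to verify against the SDK reference, and the compartment OCID is a placeholder.

    # Sketch: list certificates in a compartment with the OCI Python SDK.
    import oci

    config = oci.config.from_file()  # reads ~/.oci/config
    client = oci.certificates_management.CertificatesManagementClient(config)

    # Compartment OCID below is a placeholder.
    certs = client.list_certificates(compartment_id="ocid1.compartment.oc1..example")
    for summary in certs.data.items:
        print(summary.name, summary.lifecycle_state)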

OCI Certificates ships with three Cloud Guard detectors. Notifications appear in Cloud Guard if a CA is deleted or revoked, and a Cloud Guard alert is also triggered whenever a CA bundle is modified, letting you double-check that your cloud resources’ chain of trust is valid. More Cloud Guard detectors will be integrated with Certificates in the future.

Conclusion

OCI Certificates simplifies the long and sometimes confusing process of creating CAs and certificates. You can establish your CA hierarchy, issue certificates, and deploy them automatically to integrated resources like the load balancer in only a few minutes. With automated renewals you won’t have to worry about disruptions caused by expired certificates, and shorter validity periods help decrease exposure in the event of a breach. OCI Certificates is a completely free service that you can start using right now.

Posted in Oracle | Tagged: Oracle, Oracle for Business

Why is it preferable to use a multi-cloud solution for your business?

Posted on March 22, 2022June 1, 2022 by Marbenz Antonio

Times have changed if you just use one public cloud vendor.

Cloud was new and unique ten years ago. You needed to learn a new infrastructure, operations, and deployment technique. You saved time and money by using only one cloud provider, such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud. However, you retained mission-critical workloads on-site.

Five years ago, you built large apps entirely in the cloud. You put your production workloads in the cloud with confidence and adopted the cloud as a component of your company’s infrastructure. However, the decision to cling to one cloud has become dated: because each cloud was created to tackle a specific problem, it has evolved with its own set of strengths and capabilities. As you attempted to move increasingly sophisticated and demanding applications, you discovered performance issues, unanticipated cost overruns, and incompatibilities between the cloud’s architecture and your business workloads.

You’re still migrating to the cloud, but supplier diversification is a must in any technology. By using several clouds you gain a competitive edge, regional resiliency, and pricing leverage. When considering alternative cloud suppliers, consider the disruptor that has already built a next-generation cloud from the lessons learned by the early providers: Oracle Cloud Infrastructure (OCI).

It’s not you who’s the problem; it’s your cloud

Despite numerous pro-cloud policies, such as cloud-first or cloud-only, many corporate workloads stay on-premises. Migrating them to the cloud hasn’t been as simple as expected, and there’s a reason for that.

In general, most business workloads fall into the following groups:

  • Web-scale or mobile-scale
  • Productivity
  • Systems of record or process control
  • High-performance computing (HPC)

Web-scale workloads are designed to adapt to large fluctuations in demand, such as a Christmas sale or a viral video. For example, if there is a rapid increase in sales traffic, extra web servers can be added and then removed when sales decline. Google’s cloud was built for similar reasons, with its Search and Gmail applications being used all over the world.

Productivity workloads support a large number of users in a consistent state of use. Examples include document exchange, identity verification, and group messaging. Microsoft’s Azure cloud was created with these requirements in mind for its Office and Xbox communities.

Both of these workload types share the same design pattern of compartmentalizing operations and adding extra resources as demand grows. When one component fails, it has little or no impact on the remaining components. The cloud suppliers’ reference designs expect any component to fail and require the application to be designed to tolerate that failure.

Unfortunately, the majority of corporate workloads fall into the last two categories of systems of record or process control and HPC, where scaling out is not possible and component failure is unacceptable. To put it another way, most cloud designs do not fit the designs of most business workloads.

Systems of record and process control are typically monolithic, necessitating scale-up rather than scale-out with larger servers. Because they are monolithic, a single server failure affects the entire program, resulting in costly downtime.

HPC requires bare-metal servers, free of hypervisors, connected by high-bandwidth, low-latency networks into a distributed system.

OCI is a next-generation cloud built to handle these types of business workloads. You can move existing systems of record and process-control applications to high-reliability servers, whether virtual or bare metal, with little or no change. Bare-metal servers connected via hyper-converged networking can be used for HPC.

OCI is a cloud that enables you to complete your cloud plan by migrating current mission-critical workloads to the cloud.

Living in a multi-cloud world

Is this to say that OCI is the only cloud worth using? No.

The question is not about whether a cloud should be used, but which clouds should be used to maximize value. Gartner found that four out of every five businesses use at least two public clouds. The world of multi-cloud is already here.

Multi-cloud, however, entails much more than using services from each cloud separately; it is about linking each cloud’s best services and capabilities. Azure, for example, provides essential services such as Azure Active Directory, Microsoft 365, and the Xbox network.

Oracle built OCI with enterprise-grade dependability, bare-metal performance, and hyper-converged networking in mind. This architecture enables mission-critical applications at single-server scales that other clouds do not, as well as huge bare-metal HPC clusters, and these services are expressly priced lower than those offered by other cloud suppliers. OCI also offers mainstream cloud services, including virtualized compute, storage, and containers.

To experience multi-cloud, your data must travel between clouds with little friction, allowing you to use the services from each cloud as you see appropriate. One cloud, for example, can gather data from multiple devices, transport it to OCI for high-performance processing, and then archive it in another cloud.

OCI also facilitates genuine multi-cloud by charging lower per-byte rates on data leaving the cloud. Oracle expects OCI to work with other clouds and has designed its egress rates to encourage this, making them up to 10 times cheaper. Oracle is also a member of the Cloud Bandwidth Alliance, which allows members to exchange data for free or at a discounted cost. Other cloud companies have adopted a “walled garden” pricing model, imposing considerably higher egress costs to encourage users to keep their data in one cloud.

If you’re using Azure in a supported region, Oracle has teamed up with Microsoft to provide the Azure Interconnect, which offers a high-bandwidth, low-latency connection of less than 2 milliseconds with no egress fees between the two clouds. This interconnection serves a frequent use case for mutual customers who have invested in Microsoft and Oracle technologies and want to move to the cloud without sacrificing network performance.

Best for Oracle Database

Naturally, Oracle built the next-generation OCI to provide the best possible environment for the Oracle Database service. Customers can move from on-premises to OCI with minimal or no changes by using the Oracle Cloud Lift service, which provides free professional help.

Oracle Database can run on virtual machines for added flexibility, or on specially designed hardware for maximum performance and reliability. Because Oracle licenses scale with the hardware, many clients find that switching to OCI from on-premises lowers their overall cost while improving performance.

Posted in Oracle | Tagged: Oracle, Oracle for Business






Our Clients

Our clients have included prestigious national organisations such as Oxford University Press, multi-national private corporations such as JP Morgan and HSBC, as well as public sector institutions such as the Department of Defence and the Department of Health.

  • Level 14, 380 St Kilda Road, St Kilda, Melbourne, Victoria Australia 3004
  • Level 4, 45 Queen Street, Auckland, 1010, New Zealand
  • International House. 142 Cromwell Road, London SW7 4EF. United Kingdom
  • Rooms 1318-20 Hollywood Plaza. 610 Nathan Road. Mongkok Kowloon, Hong Kong
  • © 2020 CourseMonster®