
Insights and trends

Explore specialized content on technology, innovation, and market best practices.


By Bernardo Costa and Marcelo Trojahan August 28, 2024
Artificial Intelligence (AI) is a transformative force in today's digital landscape, yet terms like Machine Learning (ML), Deep Learning (DL), and Generative AI (GenAI) are often used interchangeably, leading to confusion. This article aims to demystify these concepts, highlighting their differences and their significance in the current industry.

Investments in AI are becoming increasingly substantial (see the "Stanford AI Index Report" as covered by the World Economic Forum, KPMG, Unite.AI, and All About AI). However, many people still lack a clear understanding of the differences between AI, ML, DL, and GenAI. This gap often leads to unrealistic expectations and misdirected investments. So, how do these technologies differ, and how can we apply them effectively? Our goal is to provide a clear understanding of these concepts, drawing on the perspectives of Ian Goodfellow, Yoshua Bengio, Geoffrey Hinton, and Andrew Ng, who have extensively addressed these concepts and their applicability to specific problems, as discussed throughout this article.

What is Artificial Intelligence (AI)?

AI is a broad field within computer science that aims to create systems capable of performing tasks that typically require human intelligence. It encompasses various technologies, from rule-based systems to advanced algorithms that can learn and adapt. Implementing AI can be complex and expensive, requiring high-quality data and sophisticated algorithms. AI is commonly classified into two categories: Narrow AI, designed to perform specific tasks, and General AI, which could perform any intellectual task a human can.

Within AI, Machine Learning (ML), Deep Learning (DL), and Generative AI (GenAI) are specialized subsets, each designed for different objectives and varying in complexity. These technologies, while interconnected, serve distinct roles in advancing AI's capabilities.
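To make the rule-based versus learned distinction concrete, here is a minimal sketch. The spam-filtering scenario, features, and threshold are illustrative assumptions, not taken from the article:

```python
# A minimal sketch contrasting a rule-based system with machine learning.
# The spam-filtering scenario and all values are illustrative assumptions.
from sklearn.linear_model import LogisticRegression

# Rule-based AI: the decision logic is written by hand.
def rule_based_spam_check(num_links: int, has_greeting: bool) -> bool:
    return num_links > 5 and not has_greeting

# Machine learning: a comparable decision boundary is learned from examples.
X = [[1, 1], [2, 1], [8, 0], [12, 0]]  # features: [num_links, has_greeting]
y = [0, 0, 1, 1]                       # labels: 0 = legitimate, 1 = spam
model = LogisticRegression().fit(X, y)

print(rule_based_spam_check(10, False))  # True: matches the hand-written rule
print(model.predict([[10, 0]]))          # [1]: the same rule, inferred from data
```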
By e-Core August 13, 2024
This case study explores the collaboration between e-Core and a leading pulp producer, who faced a critical challenge: assessing the potential impact of frost on paper production. Frost poses a significant threat, ranging from minor disruptions to severe losses in cellulose output. Given the unpredictable nature of forest asset management, especially under the influence of climate change, our partnership aimed to develop an advanced frost detection mechanism powered by AI, enabling proactive strategies to mitigate potential losses.

Opportunities

The primary objective of this project was to conduct a proof of concept (PoC) to verify the ability to identify unexpected production changes, specifically those caused by frost. We analyzed data from the Monitora and Zeus systems (tools designed for real-time monitoring and data collection in forest operations), focusing on critical areas such as geographic intelligence and forest asset management, and utilizing AI to manage the inherent unpredictability of this field.

Project Stages

Our project adhered to the CRISP-DM (Cross-Industry Standard Process for Data Mining) framework, a widely recognized methodology for data mining and machine learning projects. This structured approach guided us through six key phases, each contributing to the successful development of our AI-driven PoC. A sketch of the modeling and evaluation phases follows the list.

- Business Understanding: We worked to thoroughly understand the business objectives and PoC goals. Collaborating with stakeholders, we defined requirements, set key performance indicators (KPIs), and ensured alignment with the operational context.
- Data Understanding: We assessed the available data to understand its quality and relevance. Although Sentinel-2 satellite images were considered, they were ultimately deemed impractical due to complexities and uncertainties. Key variables, particularly meteorological factors like humidity and minimum temperature, were identified as critical for frost prediction.
- Data Preparation: We focused on ensuring the dataset's quality, selecting the most relevant columns and applying interpolation to produce comprehensive coverage of the area of interest. This preparation was crucial for feeding the model with robust inputs.
- Modeling: In this phase, we built the PoC's analytical foundation. We selected appropriate algorithms, split the data into training and testing sets, and continuously optimized performance using accuracy and recall metrics.
- Evaluation: A thorough analysis of the model's performance was conducted, ensuring it met the business understanding criteria. Collaboration with the client's experts helped validate predictions and refine the model as needed.
- Implementation/Delivery: The final phase involved implementing and delivering the PoC's results. We worked closely with the client to validate and refine outcomes, ensuring the model aligned with both business and technical goals.
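The article names the key ingredients of the modeling and evaluation phases: a train/test split, accuracy and recall as the optimization metrics, and (as noted in the results below) a Random Forest classifier. A minimal sketch under those assumptions; the library choice, file path, and column names are illustrative:

```python
# Minimal sketch of the modeling and evaluation phases described above.
# Library choice (scikit-learn), file path, and column names are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

df = pd.read_csv("frost_observations.csv")  # hypothetical export from Monitora/Zeus
X = df[["min_temperature", "humidity"]]     # the key variables named in the article
y = df["frost_event"]                       # 1 = frost occurred, 0 = no frost

# Split the data into training and testing sets, preserving class balance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("recall:", recall_score(y_test, pred))
```

Stratifying the split preserves the rarity of frost events in both sets, which keeps the accuracy and recall estimates meaningful.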
Project Results

The project led to significant advancements in frost detection for our pulp sector partner through a series of five experiments. Leveraging AI and machine learning models, we developed a highly accurate classifier that reliably distinguishes frost events from other anomalies. The experimentation process, involving several iterations, was vital in optimizing the model's performance. Key results:

- Enhanced Frost Detection Accuracy: The final model achieved approximately 98% accuracy in identifying frost occurrences, making it highly reliable.
- Refined Feature Selection: Meteorological factors, particularly minimum temperature and humidity, were identified as the most crucial variables for predicting frost, streamlining the model for better performance.
- Consistent Historical Data Performance: The model consistently identified frost events when applied to historical data, avoiding false positives during periods without frost.
- Improved Data Handling: Unsupervised learning in Experiment 5 effectively addressed challenges in labeling large datasets, enhancing the model's robustness (one common form of this approach is sketched at the end of this case study).
- Operational Impact: The AI-driven frost detection model enables proactive decision-making, helping the company mitigate potential losses from frost damage, thereby reducing operational risks and improving resource management.

Advanced AI Solutions

Using precise data classification, our team developed an advanced AI model capable of identifying frost with notable accuracy. One experiment stood out, achieving 98% accuracy in tests and consistently validating the model's effectiveness on previously unclassified historical data. The chosen Random Forest model, known for its efficiency and ease of training, provides a robust solution for precise frost identification.

Projected Benefits

- Early identification of at-risk areas within forest assets
- Optimized resource allocation based on predictive analytics
- Improved accuracy in weather forecasting, enhancing operational decision-making
- Automation of processes, leading to increased efficiency and significant cost reductions
- Strengthened technological innovation, reinforcing the company's competitive market position

Future Perspectives

In conclusion, the proof of concept demonstrates the feasibility of using data from the Monitora and Zeus systems, combined with AI and machine learning algorithms, to detect changes caused by frost. This approach significantly enhances forest asset management, particularly in a field as unpredictable as forestry under climate change. Looking ahead, there are opportunities to further refine the model by exploring more complex data characteristics, integrating additional relevant information, and expanding its capacity to recognize a broader range of climate anomalies.
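Experiment 5's unsupervised labeling step is described only at a high level. One common reading is clustering the unlabeled sensor records to propose candidate labels for expert review; a minimal sketch under that assumption, with illustrative file and column names:

```python
# Minimal sketch of unsupervised pre-labeling via clustering, one possible
# reading of the Experiment 5 approach; file and column names are assumptions.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("unlabeled_observations.csv")  # hypothetical
features = StandardScaler().fit_transform(df[["min_temperature", "humidity"]])

# Group records into two clusters as candidate "frost" / "no frost" labels.
df["candidate_label"] = KMeans(
    n_clusters=2, n_init=10, random_state=42
).fit_predict(features)

# Candidate labels would be reviewed by domain experts before training.
print(df["candidate_label"].value_counts())
```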
By e-Core August 12, 2024
The challenge

The client, a leading company in risk management for the transportation and logistics sector, faced the challenge of efficiently managing and prioritizing a vast number of workflows. Their system handled approximately 25 million events daily, with about 2 million workflows per month and 350,000 shipments tracked monthly. Operators managed 100 to 200 shipments simultaneously, but only 1 to 2% of these shipments resulted in the incidents they aimed to prevent. The challenge was to use machine learning to predict which shipments were at the highest risk and prioritize them accordingly, thereby improving operational efficiency and reducing the workload on human operators.

The solution: machine learning for risk prediction

Machine learning techniques were applied to predict risks and prioritize workflows based on the likelihood of critical events. The solution involved identifying the types of incidents that should be prioritized and selecting appropriate algorithms, considering factors like model complexity, problem-solving capacity, and predictive effectiveness. The problem was addressed using an ensemble method, a machine learning technique that builds a sequence of decision tree models and combines their outputs to create a robust and accurate model. Given the naturally imbalanced nature of the problem (only 1 to 2% of shipments involve incidents), techniques were applied to correct the data distribution and minimize its impact on predictions. The strategic focus was on minimizing false negatives (shipments with critical risks that the model incorrectly predicts as safe), as untreated incidents could lead to financial losses. The approach deliberately allowed for an increase in false positives (shipments without critical risks that the model predicts as risky), ensuring that more potential incidents were captured in the workflow management system. A sketch of this approach appears after the results below.

The results

The implementation of the machine learning model significantly improved the client's ability to predict and prioritize high-risk shipments. The model demonstrated a high capacity to distinguish between shipments with and without critical risks, achieving an AUC of 94%, indicating reliability and effectiveness in risk prediction. The false negative rate was 11.5%, a manageable figure given the difficulty of identifying incidents: the model correctly classified 88.5% of shipments with critical risks, enhancing the reliability of its applications in logistics risk management. In summary:

- High Predictive Accuracy: The model achieved a 94% AUC, indicating strong discriminatory power in predicting risks.
- Reduction of False Negatives: The false negative rate was held to 11.5%, allowing better focus on high-risk shipments.
- Operational Efficiency: Prioritizing shipments with predicted risks reduced the operational burden on human operators, enabling them to concentrate on the most critical cases.
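The article does not name the specific algorithm or library. A minimal sketch consistent with its description (a sequence of decision tree models, re-weighted for class imbalance, and biased toward catching critical shipments); scikit-learn, the feature names, the file path, and the decision threshold are all assumptions:

```python
# Minimal sketch of the approach described above: a boosted ensemble of decision
# trees with sample re-weighting for class imbalance. Library choice, file path,
# column names, and the decision threshold are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.utils.class_weight import compute_sample_weight

df = pd.read_csv("shipments.csv")  # hypothetical feature export
X, y = df.drop(columns=["critical_incident"]), df["critical_incident"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Re-weight the rare positive class (only ~1-2% of shipments have incidents).
weights = compute_sample_weight(class_weight="balanced", y=y_train)
model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train, sample_weight=weights)

scores = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, scores))

# Lowering the decision threshold trades false positives for fewer false
# negatives, matching the stated priority of not missing critical shipments.
pred = (scores >= 0.3).astype(int)  # threshold below 0.5 by design (assumed value)
tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
print("false negative rate:", fn / (fn + tp))
```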
By e-Core November 8, 2023
Discover how a top 10 paper producer reduced costs and downtime through predictive maintenance with the help of e-Core's Orbit AI.

The challenge

One of the world's top 10 largest paper producers was not fully utilizing its production potential due to high costs and the unpredictability of machinery maintenance. Every six years, they carried out minor maintenance costing approximately $3.4 million; major maintenance, costing around $7 million, took place every 12 years. Additionally, the client had to interrupt operations for 10 days every year due to regulatory requirements. Since they couldn't predict in advance which machine would need maintenance, the manufacturer had to request urgent (and consequently more expensive) maintenance services, leading to unexpected downtime and compromising operational efficiency.

They had previously tried to solve the problem by adopting a third-party managed solution for some of their machines, but the high costs hindered its scalability across the entire factory. With Artificial Intelligence (AI) and Machine Learning (ML) on the radar of CIOs in the manufacturing industry (Gartner, 2023), it made sense to seek a trustworthy, knowledgeable, and experienced IT consulting and services provider to help tackle this major challenge with new technology. The company partnered with e-Core to identify minor anomalies in equipment and conduct more precise maintenance during the annual 10-day mandatory shutdown, thus extending the interval between major maintenance events.

The solution

With the help of e-Core's Orbit AI solutions, the project began with thorough data exploration. Leveraging AWS SageMaker, we created a highly scalable solution serving multiple factory machines, with tailored predictive models for each of them. This approach improved cost efficiency, as the client pays only for prediction runs and the associated compute instance usage, eliminating the need for expensive, non-scalable third-party contracts.

Data preprocessing was carried out to prepare the data for the DeepAR model. Model selection was based on the mean absolute error metric, leading to the choice of the best-performing configuration. A deep learning architecture employing the DeepAR forecasting algorithm was then deployed to forecast multiple time series, leveraging data from 92 distinct time series acquired from various factory sensors. A fine-tuning process systematically explored different hyperparameters to optimize model performance.

After training, we developed a flexible pipeline to facilitate future model creation for different sensors and machines. Although connecting and automating each step in the process was complex, our team successfully designed a scalable pipeline for future predictions. Moreover, the pipeline supports model retraining, significantly improving forecast precision over time: machine learning models benefit from continuous improvement, and the pipeline implemented is equipped to support this iterative process.
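A minimal sketch of training SageMaker's built-in DeepAR algorithm, as described above; the IAM role, S3 paths, data frequency, and horizon lengths are placeholders, not the project's actual configuration:

```python
# Minimal sketch of training SageMaker's built-in DeepAR algorithm.
# Role ARN, S3 paths, frequency, and horizon lengths are placeholder assumptions.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
image_uri = image_uris.retrieve("forecasting-deepar", session.boto_region_name)

estimator = Estimator(
    image_uri=image_uri,
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/deepar/output",      # placeholder
    sagemaker_session=session,
)

estimator.set_hyperparameters(
    time_freq="H",         # hourly sensor readings (assumed)
    context_length=168,    # one week of history per prediction (assumed)
    prediction_length=24,  # forecast one day ahead (assumed)
    epochs=100,
)

# Each channel points to JSON Lines files of {"start": ..., "target": [...]},
# one record per sensor time series.
estimator.fit({
    "train": "s3://example-bucket/deepar/train/",
    "test": "s3://example-bucket/deepar/test/",
})
```

Because DeepAR trains one model across many related series, the 92 sensor streams can share a single training job rather than requiring 92 separate models.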
The results

The benefits of predictive maintenance are transformative. Imagine the machinery in your facility running smoothly, without interruptions, and with cost savings that bolster your bottom line. The first advantage lies in increased operational efficiency, where downtime becomes rare. The ability to foresee future sensor data elevates machine maintenance to a new level and extends the lifespan of machines, leading to a remarkable reduction in downtime, an uptick in productivity, and substantial long-term cost savings.

If maintenance every six years can be extended to eight years, and maintenance every 12 years can be extended to every 15 years, cost savings can amount to nearly $260,000 for a single piece of equipment. The model anticipates maintenance needs with an accuracy of 80% or higher, which led to a significant reduction in unplanned downtime, estimated at around 20% to 40%. The financial impact is equally compelling, with an attractive project ROI: the company forecasts cost savings of almost $850,000 per year for the business units that have embraced predictive maintenance. The project cost approximately $70,000, with investment payback expected in 6 to 8 months.
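The "nearly $260,000" figure is consistent with annualizing each maintenance cost over its old and new intervals; assuming that is the intended basis, a quick check:

```python
# Quick check of the "nearly $260,000" savings figure, assuming it is computed
# on an annualized basis (an interpretation; the article does not state this).
minor = 3_400_000 / 6 - 3_400_000 / 8    # minor maintenance: 6 -> 8 years
major = 7_000_000 / 12 - 7_000_000 / 15  # major maintenance: 12 -> 15 years
print(round(minor + major))              # 258333, i.e. ~$260,000 per year
```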
