SPE
AI in Petroleum Industry: the Current State and the Future
Dr Shahab D. Mohaghegh, a pioneer in the application of Artificial Intelligence and Machine Learning in the Exploration and Production industry, answers Arman Mukhamedyarov’s questions
Shahab D. Mohaghegh, a pioneer in the application of Artificial Intelligence and Machine Learning in the Exploration and Production industry, is a Professor of Petroleum and Natural Gas Engineering at West Virginia University and the president and CEO of Intelligent Solutions, Inc. (ISI). He is the director of WVU-LEADS (Laboratory for Engineering Application of Data Science).
Dr Shahab has more than 30 years of research and development in the petroleum engineering application of Artificial Intelligence and Machine Learning. He is the author of four books (“Shale Analytics”, “Data-Driven Reservoir Modeling”, “Application of Data-Driven Analytics for the Geological Storage of CO2”, and “Smart Proxy Modeling”). He has written over 230 technical papers and carried out more than 60 projects for independents, NOCs, and IOCs.
He is an SPE Distinguished Lecturer (2007 and 2020) and has been featured four times as a Distinguished Author in SPE’s Journal of Petroleum Technology (JPT 2000 and 2005). Dr Mohaghegh is the founder of SPE’s Technical Section dedicated to AI and machine learning (Petroleum Data-Driven Analytics, 2011). He has been honored by the U.S. Secretary of Energy for his AI-based technical contribution in the aftermath of the Deepwater Horizon (Macondo) incident in the Gulf of Mexico (2011) and was a member of the U.S. Secretary of Energy’s Technical Advisory Committee on Unconventional Resources in two administrations (2008-2014). He represented the United States in the International Standard Organization (ISO) on Carbon Capture and Storage technical committee (2014-2016).
– Dr Mohaghegh, what is Artificial Intelligence? What does it consist of?
– First, let’s identify the definitions of the two words, “Artificial” and “Intelligence”. Putting these definitions together then provides some insight into what “Artificial Intelligence” is and why this terminology was chosen to define the new technology. “Artificial” means a copy of something “Natural”: something made or produced by Homo sapiens rather than occurring naturally. “Intelligence” is the ability to acquire and apply knowledge and skills; it is the capacity for logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. Before the term “Artificial Intelligence” appeared, the word “Intelligence” was used to refer to Natural “Intelligence” and Human “Intelligence”.
Therefore, the term “Artificial Intelligence” should not change the definition of Natural and Human “Intelligence”. Rather, it should mean that Homo sapiens are making and producing what is known as Natural and Human “Intelligence”. In other words, “Artificial Intelligence” is the simulation of Natural and Human “Intelligence” using machines (computers). Since Natural and Human “Intelligence” is performed by the human brain, “Artificial Intelligence” mimics the human brain. Therefore, “Artificial Intelligence” can be defined as the simulation of Natural Human “Intelligence” by mimicking the human brain using machines (computers).
“Artificial Intelligence” is the simulation of human intelligence mimicking the “Human Brain” for Analysis, Modeling, and Decision Making.
– Very often, “machine learning” is mentioned in conjunction with the term Artificial Intelligence. Similarly, we often hear terms like “deep learning”, “reinforcement learning”, and “supervised learning”. Could you please structure all these terms (and any other relevant ones) within the context of artificial intelligence for better understanding?
– Machine Learning is a series of algorithms used to generate “Artificial Intelligence”. Before “Artificial Intelligence” was generated and used, no data-driven approach was referred to as “Machine Learning”. Since certain types of “Intelligence”, such as science and engineering, require “Learning”, and the “Machine” is the tool for the “Artificial” development of “Intelligence”, the development of “Artificial Intelligence” requires the use of “Machine Learning”. Again, let’s first identify the definitions of the words “Machine” and “Learning”; putting these definitions together provides some insight into what “Machine Learning” is and why this terminology was chosen to define the new technology.
The word “Machine”, in the context of performing algorithms, refers to “Computers”. The word “Learning” means the process of acquiring new understanding, knowledge, behaviors, skills, values, attitudes, and preferences. Since “Machine Learning” algorithms are used to develop and generate “Artificial Intelligence”, the definition of “Artificial Intelligence” given in the previous section implies that “Machine Learning” should follow the way Natural Human “Intelligence” goes through the learning process. In other words, machines should be exposed to learning the way humans are. However, given that machines (computers) are not the same as humans, even though the requirements of “Learning” are the same, Machine Learning obviously cannot be applied in exactly the same fashion.
The idea is to consider how humans learn and then see how computers can learn. For this specific application of “Artificial Intelligence” in science and engineering, attention should be paid to how humans learn science and engineering, and then to how machines can learn the same kinds of material. In that context, let’s first identify how humans learn science and engineering. It is obvious that to learn science and engineering, humans are required to go to university for several years. When humans go to university to learn science and engineering, what are they exposed to? It is entirely clear that professors at the university “Teach” the topics of a specific course to students so that they can “Learn”.
In General Intelligence, learning can be done through exposure and experience. Teaching can also be helpful, though it may not be an absolute requirement, since “General Intelligence” is part of the brain of the human species. However, when it comes to science and engineering, “Learning” absolutely requires “Teaching”. Since this is the case in Natural Human “Intelligence”, it must also be the case for “Artificial Intelligence” through “Machine Learning”. Let’s use the same example of professors at the university who “Teach” students so that they can “Learn” the topics of a course. What is the answer to the following question: is it possible for a professor at a university to “Teach” a topic that she/he is not an expert in? The answer is clear: “NO”.
In order to teach a topic, you must be an expert in that topic and know it very well, so that you can find the best way to communicate its essence and details to the students taking the course, enabling them to “Learn” it. The same is true of “Machine Learning” in the science and engineering application of “Artificial Intelligence”. Domain expertise is an absolute prerequisite for anyone who wants to use “Artificial Intelligence” and “Machine Learning” to analyze, model, and solve science and engineering-related problems.
Deep Learning refers to the use of multiple hidden layers in Artificial Neural Networks, which are Machine Learning algorithms used to develop Artificial Intelligence. In CNNs (Convolutional Neural Networks), “Deep Learning” includes several layers that use traditional statistics to convolve the input data prior to the multiple hidden layers.
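As a toy illustration (not from the interview), the point that “deep” simply means more than one hidden layer can be sketched in a few lines of Python with NumPy. The layer sizes, random weights, and input data below are arbitrary choices made for the sketch:

```python
import numpy as np

def relu(x):
    """A common hidden-layer activation function."""
    return np.maximum(0.0, x)

def deep_forward(x, weights, biases):
    """Forward pass through a network with several hidden layers.
    'Deep' learning means multiple hidden layers sit between the
    input and the output of the artificial neural network."""
    activation = x
    for w, b in zip(weights[:-1], biases[:-1]):
        activation = relu(activation @ w + b)     # hidden layers
    return activation @ weights[-1] + biases[-1]  # linear output layer

rng = np.random.default_rng(0)
layer_sizes = [4, 16, 16, 16, 1]  # three hidden layers -> a "deep" network
weights = [rng.normal(size=(m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

x = rng.normal(size=(5, 4))  # 5 samples, 4 input features
y = deep_forward(x, weights, biases)
print(y.shape)  # (5, 1): one predicted output per sample
```

The weights here are untrained; a Machine Learning algorithm such as backpropagation would adjust them so the outputs match the data, as discussed in the supervised-learning definition below.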
Unsupervised Learning means that the input data used in the Machine Learning algorithms does not include any output or results. The idea of applying unsupervised learning to the provided data is to find information about the data, showing which parts are closer and more similar to each other across several sections. This is commonly referred to as clustering.
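As a hypothetical sketch (not part of the interview), a minimal k-means loop in Python/NumPy shows what grouping similar samples looks like when no outputs are provided. The data, cluster count, and initialization below are all invented for the illustration:

```python
import numpy as np

def kmeans(points, centers, n_iter=20):
    """A minimal k-means loop: group unlabeled points into clusters
    of similar samples. No outputs/labels are given, which is what
    makes this 'unsupervised'."""
    centers = centers.copy()
    for _ in range(n_iter):
        # assign each point to its nearest cluster center
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of the points assigned to it
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# two well-separated groups of 2-D points (synthetic data for the sketch)
rng = np.random.default_rng(1)
group_a = rng.normal(loc=0.0, scale=0.3, size=(20, 2))
group_b = rng.normal(loc=5.0, scale=0.3, size=(20, 2))
points = np.vstack([group_a, group_b])

# start one center in each group (simple deterministic initialization)
labels, centers = kmeans(points, centers=points[[0, 20]])
```

After the loop, every point in `group_a` carries one label and every point in `group_b` carries the other: the algorithm discovered the two similar sections of the data without ever being told they exist.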
Supervised Learning means that the input data is used to train Artificial Neural Networks (Machine Learning algorithms) while a specific output is provided for all the given input data. In supervised learning, the machine learning algorithm aims to learn the specific output resulting from the series of inputs provided during the training process.
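A deliberately simple sketch (invented for illustration, using a linear model rather than a neural network) shows the supervised pattern: every training input comes paired with a known output, and the algorithm adjusts its parameters until its predictions reproduce those outputs:

```python
import numpy as np

# Synthetic supervised-learning data: each input x is paired with a
# known (noisy) output y; here the hidden true relation is y = 3x + 2.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=200)
y = 3.0 * x + 2.0 + rng.normal(scale=0.1, size=200)  # "measured" outputs

w, b = 0.0, 0.0        # model parameters to be learned from (input, output) pairs
lr = 0.01              # learning rate
for _ in range(2000):  # training = repeatedly correcting the predictions
    err = (w * x + b) - y
    w -= lr * 2.0 * (err * x).mean()  # gradient of mean squared error w.r.t. w
    b -= lr * 2.0 * err.mean()        # gradient of mean squared error w.r.t. b

# after training, (w, b) lands close to the true values (3, 2)
```

The same teach-by-example loop, with many more parameters, is what trains an artificial neural network in supervised mode.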
– Machine Learning (ML) Engineer and Data Scientist: are these two titles synonymous (they are very often used that way) in terms of the role they play? Different sources give different answers to this question.
– In the past couple of decades, Artificial Intelligence has become very interesting because it was shown to solve a lot of problems that could not be solved in the past. As a result, a lot of people became interested in this topic. There are major differences in how Artificial Intelligence must be used for science and engineering-related items versus general intelligence items.
In general, today these titles are used to refer to much the same thing. Nevertheless, as time moves forward, they will become more specific, since today anyone who merely looks at data calls himself/herself a “Data Scientist”.
– In your opinion, where is artificial intelligence finding its best fit within the petroleum engineering disciplines of the Upstream sector? In disciplines that are more “tangible” (i.e. mechanical) by nature, such as drilling and workover; in the subsurface disciplines, such as geology, petrophysics, reservoir engineering, and the like, which are characterized by “intangibility”; or perhaps in production engineering, which is an “intersection” of the subsurface and surface disciplines?
– Based on my experience working with subsurface and surface topics in petroleum engineering, when it is used correctly (not “Hybrid Models”), Artificial Intelligence can provide great results in petroleum engineering. In Petroleum Data Analytics (PDA) which is the application of Artificial Intelligence to petroleum engineering, the idea is to avoid making assumptions, interpretations, and simplifications. Instead, PDA helps to solve all subsurface and surface topics based on facts and realism from actual field measurements. We have generated such technology (including software applications) for AI-based Reservoir Simulation and Modeling, Production optimization from Shale Wells, Drilling, and Surface facilities.
– Historically, Petroleum Engineers have relied heavily on an “empirical approach” (correlations, interpretations) in their day-to-day work. In artificial intelligence technology, on the contrary, one uses solely “hard” (i.e. measured) data.
What is your view on the possibility of artificial intelligence “reshaping” the petroleum engineering practice, thus creating a “paradigm shift”?
– As mentioned in the last topic, the historical approach to solving petroleum engineering problems includes assumptions, interpretations, and simplifications. This is actually true of traditional engineering problem solving in general, which uses mathematical equations to model physical phenomena. Artificial Intelligence is a complete Paradigm Shift compared to traditional engineering problem-solving. The idea is to avoid assumptions, interpretations, simplifications, and even biases. When only actual “hard” data is used to solve the problem, then only facts and reality are the source of the problem-solving, not “what we think” or “what we would like” to happen.
However, when some engineers use “Hybrid Models”, they are doing the same thing they used to do prior to the “Paradigm Shift”, i.e. prior to Artificial Intelligence. Those who build “Hybrid Models” are most probably the ones who first tried to use only actual data but, lacking understanding and expertise in Artificial Intelligence, were not successful. They then generated “Soft Data” produced by mathematical equations (the same way we have traditionally solved physics-based problems) and combined it with the actual data. Data generated by mathematical equations already includes patterns and correlations. This is the reason “Hybrid Models” do work, but they have ABSOLUTELY NOTHING to do with Artificial Intelligence.
– With artificial intelligence having a wider spread in the industry, can we expect that the significance of Instrumentation and Control, as the branch of engineering “responsible” for the provision of accurate information coming directly from various sensors, gauges, and other measuring devices, would be ever-growing?
– The fact is that all field measurements include some uncertainty and noise. Nevertheless, as long as the actual field measurements provided to petroleum engineers are not “made up” in a certain fashion, they all reflect the way things have actually been measured. Given the definition of Artificial Intelligence provided in response to the first question, Artificial Intelligence tries to work like a human brain.
The same thing that was mentioned (uncertain and some noisy data) is true about the data that the human brain uses all the time. Everything we do is based on facts and reality that we see and hear all the time. Nevertheless, even if the facts include uncertainty and some noise, we can still come up with a correct solution. Imagine if someone throws a ball at a child. The child’s brain does not pick up the actual initial velocity of throwing the ball or the actual angle in exact values but rather in an uncertain and noisy fashion. Nevertheless, the child ends up catching the ball after a few trials and errors. The same will be true about the actual field measurements we use to solve engineering problems using Artificial Intelligence.
– At present, with the advent of AI, many companies in the Upstream sector, both those with a long presence in the market and new ones, use terms like “data-driven”, “physics-based”, and “physics-driven” models to highlight the alternative way of building models that they offer.
While the meaning of “data-driven models” (i.e. built based on data, in its entirety or partially) is self-explanatory, a question arises with the term “physics-based” model. The rationale that these models obey the laws of physics leads to one logical question: what basis were models built on in the past, when AI was not there, if not “on physics”?
– Unfortunately, many major service companies and almost all vendors use the terms Artificial Intelligence and Machine Learning as a marketing tool rather than a scientific tool. The main reason is that they have very little, and an unrealistic, understanding of Artificial Intelligence. The question you have asked clearly shows their problem, and here is the reason.
When these companies use the term “Data-Driven Model”, they think the data in AI is used the same way as in traditional statistics. Most probably, they have some experience in traditional statistics (it has been a commonly applied concept in petroleum engineering since the past century), but to create the impression that they are applying Artificial Intelligence in generating these particular models, they use the word “Data-Driven”. One of their main problems is that they do not realize that Artificial Intelligence is very different from traditional statistics, even though in both cases data is used to solve problems. When they use the term “physics-based” or “physics-driven”, it usually means that they add physics-related data to solve the problem using AI. They use mathematical equations to generate the physics-related data, and a large amount of such data is generated and added to the actual data they have collected.
– Continuing the topic of modeling: could you please highlight some of the major advancements in modeling with the invention of AI/ML?
– The fact is that Artificial Intelligence can model physical phenomena using actual measured data. It must be noted that the way Artificial Intelligence models physical phenomena is quite different from the traditional engineering approach. Traditionally, as engineers, we first identify the parameters and variables that are part of a given physical phenomenon and then use mathematical equations that relate those parameters and variables to model the phenomenon in the best possible fashion. Then we ask for and look for data representing the parameters and variables so that it can serve our mathematical model of the physical phenomenon.
In Artificial Intelligence, data does not serve our understanding of the physical phenomenon. Instead, data is used to generate a model of the physical phenomenon without mathematical equations. When Artificial Intelligence models physical phenomena based on actual measured data, it makes no assumptions, interpretations, or simplifications. AI-based (Top-Down) reservoir simulation and modeling is completely different from what major service companies and almost all vendors claim to perform as AI-based reservoir modeling. Top-Down Reservoir Modeling follows a realistic engineering application of Artificial Intelligence: it is not well-based Decline Curve Analysis using machine learning, and it does not use a large amount of data generated by mathematical equations (as in Hybrid Models).
– Talking holistically, Dr Mohaghegh, could you please provide some vivid examples where AI potentially could, or already did, change “the way we have always done things in petroleum engineering”?
– Yes, indeed. Good examples are “AI-based (Top-Down) Reservoir Simulation and Modeling” (Top-Down Modeling - TDM) and Smart Proxy Modeling. The SPE book I wrote in 2017 (Data-Driven Reservoir Modeling) explains the basics of Top-Down Modeling. From 2017 till now (2022), TDM has been enhanced significantly. I recently wrote a new book called “Smart Proxy Modeling”. Here are the books:
- Title: Smart Proxy Modeling – Artificial Intelligence and Machine Learning in Numerical Simulation (ISBN: 978-1-032-15114-4), 2022. Publisher: CRC Press, Taylor & Francis Group, LLC
- Title: Data-Driven Reservoir Modeling (ISBN: 978-1-61399-560-0), 2017. Publisher: Society of Petroleum Engineers (SPE). http://store.spe.org/Data-Driven-Reservoir-Modeling-P1054.aspx
Here is a short explanation of “Smart Proxy Modeling” and “Top-Down Modeling”:
Intelligent Solutions, Inc. has developed two different applications of Artificial Intelligence in reservoir simulation and modeling. The first application is called “Smart Proxy Modeling” and the second is called “Top-Down Modeling”.
Smart Proxy Modeling: In this technology, Artificial Intelligence provides a proxy model of the numerical reservoir simulation that is very different from the traditional proxy models used in the past several decades. Smart Proxy Modeling does not simplify or change the main mathematical equations used to build the numerical reservoir simulation, nor does it reduce the number of cells originally used in the simulation. It does not use traditional statistics to re-generate only a certain part of the numerical reservoir simulation. Smart Proxy Modeling reproduces the results of the numerical reservoir simulation for all the cells with more than 95% accuracy, in only a few minutes on a laptop. Developing a Smart Proxy Model requires only about 20 to 30 numerical reservoir simulation runs.
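The general workflow shape described above, where a small number of expensive simulation runs trains a fast surrogate, can be illustrated with a deliberately toy sketch in Python. The “simulator”, its inputs (permeability, porosity), and the regression features below are all invented for the illustration; the actual Smart Proxy technology uses neural networks at the cell level, not this simple least-squares fit:

```python
import numpy as np

def expensive_simulator(perm, poro):
    """Toy stand-in for a full numerical reservoir simulation run
    (the response function is made up for this illustration)."""
    return perm * np.sqrt(poro) + 0.5 * poro

# Step 1: a small design of 25 "simulation runs" with varied inputs
rng = np.random.default_rng(0)
perm = rng.uniform(10.0, 500.0, size=25)  # hypothetical permeability values
poro = rng.uniform(0.05, 0.30, size=25)   # hypothetical porosity values
response = expensive_simulator(perm, poro)

# Step 2: fit a cheap least-squares proxy on those 25 runs
features = np.column_stack([np.ones(25), perm, poro, perm * np.sqrt(poro)])
coef, *_ = np.linalg.lstsq(features, response, rcond=None)

# Step 3: the trained proxy predicts a new case almost instantly,
# without running the expensive simulator again
new_case = np.array([1.0, 250.0, 0.2, 250.0 * np.sqrt(0.2)])
proxy_pred = new_case @ coef
true_value = expensive_simulator(250.0, 0.2)
```

Once trained on the handful of runs, the proxy answers “what if” questions in microseconds, which is the practical payoff of the approach regardless of which surrogate model is used.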
Top-Down Modeling: In this technology, Artificial Intelligence provides reservoir simulation and modeling that is quite different from traditional numerical reservoir simulation. AI-based (Top-Down) reservoir simulation and modeling does not use mathematical equations for modeling; it uses only actual field measurements and avoids assumptions, interpretations, and simplifications.
The numerical reservoir simulation used in the petroleum industry in the past century is a “Bottom-Up” reservoir modeling. An original geological model of the reservoir (Bottom) is developed by the geologists and is used by reservoir engineers to history match the production (Up). The AI-based reservoir simulation is a “Top-Down” reservoir modeling.
The historical production that includes details of every well in the field and all the surface-related operational conditions (Top) is used for Geo-Analytics (AI-based geological modeling) to model the geology of the reservoir (Down) for production forecasting and production optimization.
Unlike many other approaches currently used by petroleum service and vendor companies (Artificial General Intelligence), AI-based (Top-Down) Reservoir Simulation is a full-field model and not a single well-based model. It follows AI-Ethics and avoids using “Hybrid Models” that include data generated through mathematical equations. Top-Down Modeling incorporates the Science and Engineering Application of Artificial Intelligence.
– In TDRM, where the model is built in “reverse” to the traditional approach, the geological part is not the “starting and main” point. What is the significance (or need) of the geological model in the TDRM modeling in this case?
– AI-based (Top-Down) Reservoir Simulation and Modeling (Top-Down Modeling - TDM) literally combines the reservoir, the wellbore, and the surface facilities to model fluid flow in porous media. TDM develops Geo-Analytics, which is an AI-based geological model. It does so by using the data from the “Top”, which includes all production, the characteristics of all the wellbores in the field, and all the surface operational conditions (choke setting, wellhead pressure, flow-line pressure, …), to build the reservoir simulation and model and to optimize production through OPEX and CAPEX.
– “All models are wrong, but some are useful” George E.P. Box
Commonly, the geological and reservoir (dynamic) models are built utilizing the preconceived knowledge of their “creators” - the team or individuals.
More often than not, once the ownership of the model changes for any reason, the very same model, with the same input data, will be “adjusted”, or even rebuilt by a new team, to mirror a “new” understanding of the reservoir. Can we expect that the models built utilizing AI/ML would be insensitive to “human bias”?
– That is a wonderful question, and the observation is very true from an engineering point of view. This is the main difference between AI-based (Top-Down) Reservoir Simulation and Modeling (Top-Down Modeling - TDM) and traditional numerical reservoir simulation and modeling: TDM is based only on actual measured data and avoids assumptions, interpretations, and simplifications. TDM is developed only from fact and reality, not from what we think and believe. The only issue that needs to be mentioned is that the development of TDM requires a lot of expertise in both Artificial Intelligence and reservoir engineering.
– Empirical methodologies in the form of mathematical relations are the industry standards to represent underground processes happening during hydrocarbon extraction. In your long-time experience – did you encounter cases when some of these standard methods were “not working”, or found to be outdated, and instead, alternative, new, and robust AI/ML generated procedures were the ones that were the “best fit” for the old problem. Are there any concrete examples of methodologies currently in use which can easily be replaced?
– Several initial projects that we received from many companies (mainly in the Middle East and Southeast Asia) for AI-based (Top-Down) Reservoir Simulation and Modeling (Top-Down Modeling - TDM) involved exactly the situation you formulated in your question: “cases when some of these standard methods were not working”. Once we used TDM instead of traditional numerical reservoir simulation, it showed that Artificial Intelligence could provide much better, fact-based models for optimizing oil production and for deciding where to drill the next infill wells to increase oil production. In such projects, TDM was able to provide highly accurate history matching of oil, gas, and water production, results that had been impossible to achieve through numerical reservoir simulation; that was the main reason these companies contacted us requesting to apply TDM.
An example of an optimum infill location provided by TDM, compared to infill locations found by numerical reservoir simulation, was presented by a company manager in a panel session at the 2020 ATCE.
– Could you please elaborate more on the Petroleum Data Analytics?
– Petroleum Data Analytics (PDA) comprises the science and engineering application of Artificial Intelligence in petroleum engineering. PDA’s objective is petroleum engineering problem-solving and decision-making. PDA will fully control the future of science and engineering in the petroleum industry. It is highly important for the new generation of scientists and petroleum professionals to develop a correct and realistic understanding of Artificial Intelligence.
Like the application of this technology in other engineering-related disciplines, Petroleum Data Analytics addresses two major issues that determine the success or failure of this technology in our industry: (a) the differences between “engineering” and “non-engineering” problem solving and decision-making, and (b) how Artificial Intelligence is differentiated from traditional engineering problem solving and traditional statistical analysis.
In the past few years, a lack of success or mediocre outcomes of Artificial Intelligence in our industry has been quite common. To a large degree, this has to do with the superficial understanding of Artificial Intelligence by petroleum engineering service companies and vendors and with their concentration on marketing schemes rather than on science and technology.
– Apart from the managerial decision and technical capabilities, in your opinion, what would be needed to make AI a status quo technology in the petroleum industry?
– Artificial Intelligence is part of a new scientific revolution that will change everything in our world within a few decades. When it comes to engineering, specifically petroleum engineering, the future of our industry will very much involve Artificial Intelligence. This technology will enhance everything in our industry compared to the last century. Using AI, everything in our industry becomes more realistic and much faster. The same will happen in all other industries as well. Remember what happened during the “Industrial Revolution” in the mid-18th century. The “AI Revolution” will be much more important and much faster.
– In its very simple form, we can say that an AI/ML practitioner needs to feed in the input data and define the required output. The path between these two points is then “mapped” by the embedded machine learning algorithms themselves.
What potential do concepts called “No Code” and “Low Code” (with an emphasis on graphical interfaces rather than “hard coding”) have to be integrated into the practice of AI/ML engineers, so that they can focus on the core item rather than on how to code it correctly?
– It is important to note the definition of Machine Learning in order to answer this question:
“Machine Learning” is the science of making computers act (a) without being explicitly programmed and (b) through using Open Computer Algorithms and Learning from Data.
When someone needs to use machine learning algorithms to solve problems, it does not mean that he/she needs to know how to code the algorithms. The reason is the large number of software codes that have already been developed and provided to everyone for free. However, it is important for engineers and scientists to know and understand the mathematical details of all the machine learning algorithms they use to build AI-based models, because knowing those details helps them use the coded algorithms in a much better way. At the same time, scientists and engineers must understand that the AI approach is very different from the traditional modeling of physical phenomena through mathematical equations. Therefore, they must realize that knowing the mathematical details of all the machine learning algorithms does not make them AI experts.
To solve science and engineering-related problems using Artificial Intelligence through machine learning algorithms, unlike traditional statistics, your job is not to take all the data you receive and feed it as input to the machine learning algorithm to model the output that is your objective. The correct use of the available “real” and “actual” data (in petroleum engineering, the field measurement data) is to use your petroleum engineering domain expertise to “Teach” the machine learning algorithm about the output. My two decades of experience have shown that 50% to 60% of the time in AI-based modeling in science and engineering is spent working with the data you have received in order to teach the machine learning algorithms.
– From a technical point of view, AI is a very powerful technology. Yet, rather unusual for the technical term, the word “Ethics” appears in conjunction with AI. Could you please elaborate more on the significance of AI ethics? And its specifics in relation to the petroleum industry.
– This is correct. Unfortunately, some scientists and engineers think that AI-Ethics does not apply to solving science and engineering problems with AI. To be exposed to all the details of AI-Ethics and how it applies to science and engineering problem-solving using Artificial Intelligence, please refer to the following two articles that I have written on this topic:
AI-Ethics in Engineering; The Bias of Traditional Engineers in AI-based Modeling of Physics (Parts 1 - 2)
Shahab Mohaghegh – September 6, 2021 – Medium.Com
https://shahab-mohaghegh.medium.com/ai-ethics-in-engineering-65ab23af3f76
https://shahab-mohaghegh.medium.com/ai-ethics-in-engineering-437ec07046a6
– “Software Is Eating the World, but AI Is Going to Eat Software” (Jensen Huang)
How do you see the interrelation between “industry standard” software engineering packages and emerging artificial intelligence/machine learning generated products? Would there be a synergy or competition?
– Currently, several AI-based (machine learning) software applications can be used in the petroleum industry to develop AI-based (Top-Down) Reservoir Simulation and Modeling and Shale Analytics. Here are the software applications and the links to learn more about them:
1) IMagine – Software Application for “AI-based (Top-Down) Reservoir Simulation and Modeling” - http://www.intelligentsolutionsinc.com/Products/IMagine.shtml
2) IMprove – Software Application for “Shale Analytics” – http://www.intelligentsolutionsinc.com/Products/IMprove.shtml
– There is a good saying “Software is only as good as the people using it”. By referring to this analogy, what are the requirements for artificial intelligence practitioners to become “good” at using artificial intelligence/machine learning technology?
– That is absolutely correct. When using AI software, specifically for science and engineering, the user must be exposed to the reality of how AI should be used to solve science and engineering-related problems. This requires a substantial amount of experience and research.
One should consider the question: “How long and how much work does it take to become an expert petroleum engineer?” Becoming an expert in Artificial Intelligence requires the same amount of experience and hard work.
– What advice could you give aspiring artificial intelligence practitioners to succeed in their endeavors?
– Study the following items in great detail:
- History of Artificial Intelligence
- Correct Definitions of Artificial Intelligence
- Science & Engineering Application of Artificial Intelligence
- Modeling Physics using Artificial Intelligence
- Artificial Intelligence vs Traditional Statistics
- Ethics of Artificial Intelligence (AI-Ethics)
- Explainable Artificial Intelligence (XAI)
I usually teach short courses on these topics.
– Thank you for your insightful answers.
Arman Mukhamedyarov, Reservoir Engineer
info@petroleumchronicle.com