AI: A Relatively Gray Picture (Today)
Understanding how AI works helps us drive solutions and design products that meet our clients' needs under a strategic approach, and with that, contributes to the success of your company. The goal of this article is to prepare you to understand how AI works.
The term AI was coined in the 1950s. The biggest change since then has been the increase in the computing power behind AI: many of the algorithms have existed, as such, for more than 30 years, and this power has allowed them to scale and achieve results never seen before.
The main components of AI focused on machine learning are:
The objective. This is probably the most important element, since it forces us to focus on the latent problem or need of our audience that we want to solve. Under a Lean Startup approach, it is important that this is a real problem. To define it, we ask ourselves:
- What is the task we want to solve and that our tool/algorithm must learn?
- What is its goal? This can range from predicting the behavior of the stock market to knowing when to wake up in the morning.
In short, the objective is what we configure our algorithm or tool to achieve through learning.
It is important to mention that, depending on the solution approach we choose, there are several ways to implement machine learning: no-code tools, tools that require very little code, and tools in which the solution must be fully programmed. In this series of articles we will focus on the first kind, no-code tools, which allow the business decision maker to evaluate the solution strategically rather than technically.
The tool. There are different types of AI algorithms, each suited to different use cases.
How will your algorithm/tool learn?
For example: the type of algorithm used to suggest movies on HBO is different from the one used for Tesla’s self-driving cars.
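To make this concrete, here is a minimal sketch in Python, using scikit-learn with invented toy data, of how two different use cases call for two different algorithm families: a similarity search for recommendations versus a classifier for pattern recognition. All numbers and labels are purely illustrative.

```python
# A minimal sketch with invented toy data: two use cases, two algorithm families.
import numpy as np
from sklearn.neighbors import NearestNeighbors      # similarity search: "viewers like you also watched..."
from sklearn.neural_network import MLPClassifier    # pattern recognition: label an image from its pixels

np.random.seed(0)

# Use case 1: recommending movies from user ratings (rows = users, columns = movies).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
])
recommender = NearestNeighbors(n_neighbors=2).fit(ratings)
_, similar_users = recommender.kneighbors(ratings[:1])
print("users most similar to user 0:", similar_users)

# Use case 2: recognizing a visual pattern in tiny "images" (rows = flattened pixels).
images = np.random.rand(20, 16)                   # 20 images of 4x4 pixels
labels = (images.mean(axis=1) > 0.5).astype(int)  # toy label: bright vs. dark
classifier = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=0).fit(images, labels)
print("predicted label for the first image:", classifier.predict(images[:1]))
```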
The data. AI learns from examples, both past examples and real-time examples, and those data samples are what feed the algorithm so that it can achieve its goal.
For example, if we were trying to predict stock market performance, we would need many diverse data points from various sources, such as interest rate increases or even weather forecasts.
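As an illustration, here is a minimal sketch in Python (pandas) of how such diverse sources could be merged into a single table of examples for the algorithm to learn from. The column names and values are invented for the example.

```python
# A minimal sketch, with invented numbers, of combining diverse data sources
# (past prices, interest rates, weather) into one table of learning examples.
import pandas as pd

prices = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-02", "2024-01-03", "2024-01-04"]),
    "close_price": [101.2, 99.8, 102.5],
})
rates = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-02", "2024-01-03", "2024-01-04"]),
    "interest_rate": [5.25, 5.25, 5.50],
})
weather = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-02", "2024-01-03", "2024-01-04"]),
    "avg_temperature_c": [3.1, 4.7, 2.0],
})

# Each merged row becomes one example the algorithm can learn from.
training_data = prices.merge(rates, on="date").merge(weather, on="date")
print(training_data)
```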
The learning model. The tool/algorithm learns from the data we give it as input, so that it can emulate the existing reality and make a decision. Today's non-generative tools work within a specific domain and problem. Examples: recognizing patterns in images, or analyzing large data sets to make decisions similar to ones made before, taking as input a set of similar data that represents previous experience with those decisions.
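The sketch below, assuming scikit-learn and an invented loan-approval example, shows this learning step: the model is fit on past inputs together with the decisions that were made, so it can emulate those decisions for a new, similar case.

```python
# A minimal sketch of the "learning" step on invented past decisions.
from sklearn.tree import DecisionTreeClassifier

# Past experience: [monthly_income, existing_debt] -> was the loan approved? (1 = yes, 0 = no)
past_cases = [[4200, 300], [1500, 900], [5100, 100], [1200, 1100], [3800, 450]]
past_decisions = [1, 0, 1, 0, 1]

model = DecisionTreeClassifier().fit(past_cases, past_decisions)

# The model now emulates the previous decisions for a new, similar case.
new_case = [[3000, 500]]
print("decision for the new case:", model.predict(new_case))
```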
The query and precision of the results. Querying is when we ask the decision model we created about a new but similar experience, and it responds based on its classification with similar answers. How precise the results are depends on the similar experiences we have given it as input. For example, suppose we have a tool that performs image pattern recognition to distinguish dogs from cats. The tool only works well when we have fed it enough images of each category, enough that it can recognize a dog of a breed it has never seen. But what if we give it as input an object that is not an animal at all, such as a furry paintbrush? It will still classify it as a dog or a cat, with a confidence that reflects how many of the images it has seen resemble the brush. The idea is to keep training our algorithm so that it can disambiguate and converge on the meaning we are looking for.
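The following sketch illustrates this with invented numeric "image features" standing in for real pictures: a dog-vs-cat model answers confidently for a new animal, but an out-of-domain input like a furry paintbrush is still forced into one of the two known categories.

```python
# A minimal sketch: querying a dog-vs-cat model with in-domain and out-of-domain inputs.
from sklearn.linear_model import LogisticRegression

# Hypothetical features extracted from images: [size, snout_length]
dog_and_cat_images = [[30, 8], [35, 9], [28, 7], [10, 2], [9, 1], [12, 3]]
labels = ["dog", "dog", "dog", "cat", "cat", "cat"]

model = LogisticRegression().fit(dog_and_cat_images, labels)

# Querying with a genuinely new dog works well...
print(model.predict([[32, 8]]), model.predict_proba([[32, 8]]))

# ...but a furry paintbrush has no category of its own: the model still answers
# "dog" or "cat", with whatever probability its features happen to resemble.
paintbrush = [[15, 4]]
print(model.predict(paintbrush), model.predict_proba(paintbrush))
```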