Canvass AI is a Toronto-based artificial intelligence company that specializes in AI solutions for industrial clients. In March, Canvass AI released new interactive AI software designed to empower industrial engineers. This comes amid a wave of interest in new artificial intelligence systems globally.
CCI President Benjamin Bergen sat down with Canvass AI CEO Humera Malik to talk about their technology. This transcript has been edited for length and clarity.
Benjamin Bergen: Thanks so much for taking the time to chat. For starters, can you tell me a bit about the history of Canvass AI, and the original idea that was behind the company that kind of got it started?
Humera Malik: We founded Canvass AI to empower industrial users to take control of their operational data and extract value from it.
I saw that industrials were spending heavily on data infrastructure, but only a small percentage of that investment was actually being used to derive value from their data. The other big challenge was that engineers were unable to connect with their data beyond using spreadsheets.
This is when I realized that there was an opportunity here. It led to our purpose-built AI software for engineers.
We recently released a new version of it, which we consider a major milestone, as it offers more capabilities to harness the full power of AI for simple and complex problem solving. Just by using it, industrials can upskill their workforce.
One customer told us that adding Canvass AI to their daily toolkit has eliminated the need for engineers to spend 30% of their time on data gymnastics and analysis. Now they work on more strategic decision-making based on data rather than just statistics.
BB: You mentioned this new software that you recently launched. My understanding is it’s like an interactive system for industrial engineers? Can you tell us a little bit more about it — what it is, and what it does?
HM: We’re very excited about it as the response has been great so far.
As I said, our goal is to empower industrial engineers. This new release provides an intuitive experience to look back into the data and look forward using an AI lens.
We call this user experience Ready, Set, Go.
Ready is about testing the data and the use case.
Set is about contextualizing the data: for example, identifying quality problems or process abnormalities, or tracing the root cause of a process anomaly.
Go is when the engineer gets these insights and is ready to start using them to control a process or make a timely decision. They can operationalize these AI use cases and solutions in a matter of hours or days. For example, using simulated lab measurements to reduce invasive testing and accelerate time to impact.
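To make the three stages concrete, here is a minimal sketch of a Ready, Set, Go-style workflow on sensor readings. Everything in it is an illustrative assumption: the function names, the z-score anomaly check, and the thresholds are not Canvass AI's actual software or API, just one simple way an engineer's data could flow through "test, contextualize, act" stages.

```python
# Hypothetical sketch of a "Ready, Set, Go" workflow: all names and
# thresholds are illustrative assumptions, not Canvass AI's API.
from statistics import mean, stdev

def ready(readings):
    """Ready: test that the data is usable for the use case."""
    return len(readings) >= 10 and all(
        isinstance(r, (int, float)) for r in readings
    )

def set_context(readings, z_threshold=2.0):
    """Set: contextualize the data by flagging process abnormalities
    (here, a simple z-score check)."""
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [i for i, r in enumerate(readings)
            if abs(r - mu) / sigma > z_threshold]

def go(anomaly_indices):
    """Go: turn the insight into an operational decision."""
    return "investigate process" if anomaly_indices else "process nominal"

# Example: nine nominal readings and one outlier at index 9.
readings = [10.1, 10.0, 9.9, 10.2, 10.1, 10.0, 9.8, 10.1, 10.0, 25.0]
if ready(readings):
    anomalies = set_context(readings)   # flags the outlier
    decision = go(anomalies)            # "investigate process"
```

The point of the sketch is the shape of the workflow, not the statistics: each stage produces something the next one consumes, which is what lets a use case be operationalized quickly once the data checks out.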
We’ve received great feedback from customers saying how it’s already helping them be more efficient and get back more time in their day. These folks are 120% subscribed in their current jobs, and getting time back in their day is a huge win.
BB: What do you make of the explosion of interest in these sort of generative AI tools? Do you think we’re going to see more of them deployed in different areas of society?
HM: In my opinion, with generative and conversational AI, we must be cautiously optimistic.
The explosion of experiments with it is certainly creating a lot of hype and making people more comfortable using AI. Conversational interfaces make it accessible and available to everyone, and this removes the fear associated with this technology.
This is a positive thing because it can empower people to upskill and create new ways of doing things. Content writers can use these tools to improve their writing, and designers can use them to design better websites.
However, we need to be cautious about how these tools are used. While I’m a big user of generative AI tools and believe that they have taken us way ahead in terms of experience, we must be mindful of their impact on all of us.
BB: I like that framing of it: there are positives, in terms of it being, you know, a useful tool. And really, it allows workers to do so much more.
I think your comment about needing to make sure that we’re balancing some of the potential misuses or harms to society, that’s kind of an interesting piece to focus on. What are your thoughts on that? Is that through regulation? What’s the way to ensure that this doesn’t become a negative force for society?
HM: I agree that it’s important to balance the positives of generative AI tools with the potential harms they can cause. As with the Internet, we have seen both sides of it, and we’re still not clear about how to manage its misuse. Therefore, we need to be upfront about the potential risks and consider implementing regulations to create a safe environment for users, especially for children.
Regulation is not a bad word, and it’s not about being restrictive. It’s about ensuring a safe harbour for everyone. We need to think ahead and identify the different ways in which these tools can be misused and then determine the appropriate regulations or measures to address them. Monitoring and managing the use of these tools is critical, and we cannot wait for negative consequences to occur before acting.
We have learned enough through the evolution of the Internet, smartphones, and applications, and we should know how to regulate the use of generative AI tools properly. By doing so, we can ensure that these tools are used ethically and responsibly, and we can prevent them from becoming a negative force in our daily lives.
BB: Do you think working in an industrial environment makes you more sensitive to safety and the importance of regulation when it comes to new technologies?
HM: Working in an industrial environment makes one more sensitive to safety and the importance of regulation when it comes to new technologies. The whole regulation side in industrial manufacturing, especially in high-risk sectors like oil and gas, is critical, and our customers are extremely sensitive to it. This sensitivity has made us aware of the importance of adhering to safety standards and regulations when using new technologies, including AI.
Our company has developed AI specifically for industrial users, ensuring that it’s not a risk to safety. The question of whether a closed-loop system can be built, a solution not requiring engineer oversight, always comes up, and the answer is always yes, but it’s essential to assess the risks associated with it. The risk in the finance sector, for example, may be high, while the risk in retail may be low. However, on the industrial side, there are lives at stake, and the consequences of failure can be catastrophic. It’s therefore crucial to approach the use of AI in an industrial setting with a deep understanding of the potential risks and the importance of adhering to safety standards and regulations.
The Council of Canadian Innovators is a national business council of more than 150 scale-up technology companies headquartered in Canada. Our members are job-creators, philanthropists and leading commercialization experts in the 21st century digital economy.