• Industry News

    With the Rise of Chatbots, Security Concerns a Constant Issue to Tackle

    Nearly every hot new technology trend is immediately followed by questions about security and online safety. And the true test of any given platform's security is how safely it sends and receives data, and whether or not that data can be intercepted.

    Chatbots operate on internet security protocols that have been in place for years. As with any internet technology, the most important concern is to make sure conversations and information shared between two parties are completely secure and remain private against third-party interception or manipulation.

    “Every organization looking into chatbots as a way to enhance their customer experience is taking a long, hard look at how important it is to make sure every possible security concern is addressed,” said DISYS Director of Process Automation Design Naveen Vijay.

    Since chatbots are mainly built as Artificial Intelligence (AI) tools on top of existing communication systems (think Facebook Messenger, currently the most popular bot platform), there is already a cyber fortress built around communication between users. But the sensitive data chatbots can now obtain through user interaction – such as Personally Identifiable Information (PII) – demands an additional layer of security, as bots can act as intermediaries for purchases, making them carriers of valuable credit card and financial information.
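    One common safeguard for that added layer is to redact PII, such as card numbers, from a message before it ever touches logs or storage. The sketch below is an illustrative assumption, not any platform's actual mechanism; the `redact_pii` helper and its patterns are deliberately simplified, and real PII detection typically relies on dedicated data-loss-prevention tooling.

```python
import re

# Illustrative patterns only -- real PII detection is far more involved.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")   # 13-16 digit card numbers
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")     # US Social Security numbers

def redact_pii(message: str) -> str:
    """Mask card numbers and SSNs before the text is logged or stored."""
    message = CARD_RE.sub("[CARD REDACTED]", message)
    message = SSN_RE.sub("[SSN REDACTED]", message)
    return message

print(redact_pii("My card is 4111 1111 1111 1111, thanks!"))
# My card is [CARD REDACTED], thanks!
```

    Redacting at the point of ingestion means the bot can still complete a transaction through a secure payment handoff while never retaining the raw number itself.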

    If a chatbot is deployed on an individual organization’s server (for instance, as an enterprise tool built within an established CRM), it is considered a private bot and is less vulnerable to PII leaks. But if it is on a public channel, used by anyone and everyone, the bot inherits the security protocols put in place by the host, and further encryption is needed for secure transmission of PII.

    Many bot platforms are currently testing and deploying end-to-end encryption to tackle these concerns. They are taking these steps not only to protect the information of their users, but also to shore up vulnerabilities chatbots could open in the host through their system-to-system communications.
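    The core idea of end-to-end encryption is that plaintext exists only at the two endpoints, so an intermediary relaying the message sees nothing usable. As a toy illustration only (not a production scheme; real deployments rely on vetted protocols such as TLS or the Signal protocol), here is a one-time pad in Python, where a message XOR-ed with a random single-use key is unreadable to anyone without that key:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with the corresponding key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"Card ending in 4242 approved"
key = secrets.token_bytes(len(message))  # random key, used exactly once

ciphertext = xor_bytes(message, key)     # what a relay or interceptor sees
recovered = xor_bytes(ciphertext, key)   # only a key holder can do this

assert recovered == message
```

    The point of the sketch is the trust boundary: the relaying platform handles only `ciphertext`, so a compromise of the channel does not expose the conversation.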

    “Any time you add another layer onto existing communication platforms, there is a level of complexity added to the way in which platforms keep their users and their information safe,” Vijay said.

    By their very function, chatbots collect information about users so the AI can personalize and adapt the conversation to the whims of the user. This could be through accessing a pre-determined profile or through answers to key questions the bot might pose. But how long this information is stored, and how it is used beyond the human/bot interaction, are concerns that must be addressed before a chatbot is deployed into any environment.

    In many industries – particularly healthcare and finance – regulations are in place that set mandates regarding the gathering of information through electronic means. But other industries aren’t as regulated when it comes to personal information, and these serious questions are left up to developers and business stakeholders.

    “How, when and where a customer’s PII is kept after a transaction is an important risk most consumers don’t think about,” Vijay said. “This goes beyond your saved settings in Facebook or in Amazon. This is thinking through what information chatbots can absorb to further their knowledge base, and how long that data can be accessed to improve their function.”

    Where this information is stored is also a high-value question when it comes to bot data collection. Beyond the overall security of storing valuable information, where this data resides is extremely relevant to how the bot performs: when storage reaches capacity, the functionality of the bot can be severely compromised. Stringent security standards should be in place before such information is gathered, much less stored, and data-purge protocols may be necessary.
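    A data-purge protocol can be as simple as a time-to-live (TTL) on every stored record. The sketch below is a hypothetical illustration, not any vendor's mechanism; the `ConversationStore` class and its 30-day window are assumptions chosen for the example, and it drops conversation records once their retention window expires:

```python
import time

RETENTION_SECONDS = 30 * 24 * 3600  # hypothetical 30-day retention policy

class ConversationStore:
    """Minimal in-memory store that purges records past their TTL."""

    def __init__(self, retention=RETENTION_SECONDS):
        self.retention = retention
        self._records = []  # list of (timestamp, record) pairs

    def add(self, record, now=None):
        """Store a record, stamped with the current (or injected) time."""
        self._records.append((time.time() if now is None else now, record))

    def purge(self, now=None):
        """Remove expired records; return how many were purged."""
        now = time.time() if now is None else now
        keep = [(ts, r) for ts, r in self._records if now - ts < self.retention]
        purged = len(self._records) - len(keep)
        self._records = keep
        return purged
```

    Running the purge on a schedule keeps both the privacy exposure and the storage footprint bounded, which addresses the capacity concern above as well as the retention one.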

    “This very concern is why many bots in development start out simple and are built upon as relevant security concerns are solved,” Vijay said. “Tackling these concerns is unlike anything most have seen to date when it comes to complexity.”

    Finally, as bots grow in popularity and in their functional use, it is only natural for their threat potential to also continually evolve. Consider the rise of email and how spam, phishing scams and more became a daily occurrence everyone had to learn to avoid when clicking through an inbox.

    It is not far-fetched to think chatbots will evolve into online ‘criminals’ deployed by humans with unsavory intentions. Chatbots might even pose more of a deceptive threat when you take into account their natural-language capabilities and AI, where interactive conversation is a natural feature of the technology. A chatbot could easily entice a person to click on a malicious link or divulge information that is damaging either personally or to a business.

    “With each new technology and each new feature set rolled out, new threats emerge and those who specialize in the tech have to develop brick walls to prevent these threats from advancing,” Vijay said.

    As chatbots and AI are poised to change business as we know it, the security threats that come with the technology’s growing pains will pose a never-ending obstacle to those developing it. And as chatbots mature, so will creative ways to circumvent newly developed security – making it all the more important for security and IT teams to be diligent in their attention to all AI deployments.

    “There is major enthusiasm to be first in launching chatbots throughout a multitude of industries, and it is critical for organizations not to get ahead of themselves and cut corners on the bot’s development,” Vijay said. “Security has to be thought through and is an evolving process that will need constant attention as users begin to depend on the technology to perform more and more tasks.”

    About DISYS:
    Digital Intelligence Systems, LLC (DISYS) is reinforcing its commitment to helping clients Accelerate Productivity with its expanded, reimagined Automation Center of Excellence (ACE).

    The Automation Center of Excellence is a development and services hub creating custom client solutions around Robotic Process Automation (RPA) and Scripting & Automation Testing best practices. ACE’s core purpose is to help clients reduce cost while increasing productivity and reaching key business goals in a more efficient, timely manner. ACE is home to Automation experts and industry thought leaders who are committed to reinventing the way businesses perform daily tasks.