Grizzly AI’s software connects to OpenAI’s large language model (LLM), GPT-4, via an application programming interface (API). This allows Grizzly AI to add value on top of GPT-4’s capabilities while controlling security for the firm.
Security guardrails in Grizzly AI prevent any company data or AI outcomes from being used to train GPT-4.
Grizzly AI is a safe, low-touch, easy-to-use entry point for companies of any size to build a force multiplier for the power of generative AI.
The software works ‘right out of the box’: no integration delays, no implementation cost and no training needed, thanks to an exceptionally intuitive interface with outstanding usability and user experience.
A summary of the key security features of Grizzly AI follows.
Grizzly AI’s software is regularly penetration-tested by the cyber security team of a Big Four firm, which also reviews Grizzly AI’s compliance with other key industry cyber security standards.
Grizzly AI Limited is progressing towards SOC 2 Type 2 security compliance and audit procedures with the cyber security team of a Big Four firm.
Similar internal information protection processes towards ISO 27001 certification are also underway.
The Grizzly AI software breaks documents or questions up into subsets, or ‘tokens’, to enable the generative AI process to work.
‘Tokens’ average roughly four characters in length and are sent to OpenAI’s GPT-4 servers in the US for processing.
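The four-characters-per-token figure is a rule of thumb rather than a fixed size. A minimal sketch of how a document’s token count can be estimated from that heuristic (this is an illustration only, not Grizzly AI’s or OpenAI’s actual tokenizer, which splits text on learned subword units):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4-characters-per-token rule of thumb.

    Real tokenizers (e.g. OpenAI's byte-pair encodings) split on subword
    units, so actual counts vary with the language and content of the text.
    """
    return max(1, round(len(text) / chars_per_token))

# A 400-character paragraph is therefore roughly 100 tokens.
```

Such an estimate is useful for sizing how much of a document must travel to the model for a given question.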
Any ‘tokens’ sent to OpenAI’s servers are held for a maximum of 30 days, to enable abuse monitoring to be maintained.
This data is not viewed by OpenAI or used to train any of their underlying models.
Client data uploaded to Grizzly AI and the AI responses are all encrypted in motion.
The ‘tokens’ sent by Grizzly AI to OpenAI’s GPT-4 and the responses received are all encrypted in motion.
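‘Encrypted in motion’ means every exchange travels over a validated TLS channel. A minimal sketch of the kind of client-side TLS configuration this implies, using Python’s standard `ssl` module (an illustration of the general technique, not Grizzly AI’s actual transport code):

```python
import ssl

def strict_tls_context() -> ssl.SSLContext:
    """Build a client-side TLS context for encrypting data in motion.

    ssl.create_default_context() enables certificate validation and
    hostname checking by default; we additionally refuse any protocol
    older than TLS 1.2.
    """
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context
```

A context like this would wrap the socket carrying tokens to the model and responses back, so plain-text transport is never used.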
Any company file uploaded to Grizzly AI is protected within Grizzly AI’s own separate Microsoft Azure database, located in Microsoft’s South East Australia region, in Sydney, Australia.
Both OpenAI and Grizzly AI use Microsoft’s Azure infrastructure. This means that all client data is managed within the Azure platform.
This data is not accessible to the LLM, GPT-4, for training.
Whenever a question is asked of a file or folder within a client’s knowledge base in Grizzly AI, ‘tokens’ (a subset of this data) are sent to OpenAI’s servers in the US, where the AI processing is executed.
When any generative AI analysis is done on company documents loaded into Grizzly AI, only those company documents are used as a source for generating any outputs. This drastically reduces the potential for ‘hallucinations’.
This is because Grizzly AI is only using the natural language generation (NLG) capability and ‘human’ reasoning of GPT-4’s algorithms to analyse the information, rather than seeking answers outside the scope of the company documents.
In addition, any results of that analysis also provide references to the paragraph, page and source document, thus allowing easy verification that the results are not ‘hallucinations’.
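The pattern described above, constraining the model to supplied excerpts and requiring source citations, can be sketched as a prompt-construction step. The field names (`doc`, `page`, `text`) and wording are illustrative assumptions, not Grizzly AI’s actual schema:

```python
def build_grounded_prompt(question: str, excerpts: list[dict]) -> str:
    """Assemble a prompt that restricts the model to supplied excerpts.

    Each excerpt dict carries 'doc', 'page' and 'text' so the answer can
    cite its sources, making results easy to verify against the originals.
    """
    lines = [
        "Answer ONLY from the excerpts below. Cite [document, page] for each claim.",
        "If the excerpts do not contain the answer, say so.",
        "",
    ]
    for i, ex in enumerate(excerpts, 1):
        lines.append(f"[{i}] ({ex['doc']}, p.{ex['page']}) {ex['text']}")
    lines += ["", f"Question: {question}"]
    return "\n".join(lines)
```

Because the instructions confine the model to the numbered excerpts, answers can be traced back to a specific page of a specific document.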
Company administrators of Grizzly AI can, if they wish, prevent any or all users from accessing the wider functionality of GPT-4’s LLM. This restriction significantly reduces the possibility of ‘hallucinations’.
That is, the employee is restricted to only company generative AI tasks using only the company’s own documents. This feature, together with other ‘guardrails’, or controls, helps to ensure that enterprise data is protected.
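This kind of administrator guardrail amounts to a per-user permission check before any request leaves the application. A minimal sketch, with hypothetical field names chosen for illustration:

```python
def request_allowed(user: dict, request_scope: str) -> bool:
    """Guardrail check: everyone may run tasks over company documents,
    but access to the wider open-ended LLM requires an admin-granted flag.

    'open_llm_access' and the scope names are illustrative assumptions.
    """
    if request_scope == "company_documents":
        return True
    return user.get("open_llm_access", False)
```

With the flag off, an employee is confined to generative AI tasks over the company’s own documents; an administrator can grant wider access selectively.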
Any files uploaded to Grizzly AI’s repositories may be automatically deleted on a regular cadence, or deleted at will by an authorised user.
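An automatic-deletion cadence is, in essence, a scheduled job that purges files older than a retention window. A sketch of the selection step, using an illustrative inventory format rather than Grizzly AI’s actual storage API:

```python
import datetime

def files_to_purge(files: list[dict], max_age_days: int) -> list[str]:
    """Select uploaded files older than the retention window.

    'files' is a hypothetical inventory of {'name', 'uploaded'} records;
    a scheduled job would delete the returned names on each run.
    """
    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(
        days=max_age_days
    )
    return [f["name"] for f in files if f["uploaded"] < cutoff]
```

Running this on a schedule with, say, a 30-day window gives the regular cadence described above, while an authorised user can still delete individual files at any time.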
The ‘prompts’, the questions users put to the generative AI, must also be closely guarded. OpenAI’s legal partnership agreement with Grizzly AI ensures that prompts, and any corporate data, will not be used by OpenAI to train GPT-4 or any other OpenAI LLM.
With Microsoft holding a reported 49% stake in OpenAI, any breach of this contractual condition would be irrational: trust would be broken and OpenAI’s revenues would plummet.
Enterprise customers may also elect to maintain an instance of Grizzly AI behind their own firewall to further support their data governance compliance requirements.
However, it should be noted that information ‘tokens’ must still pass through the corporate firewall to access the GPT-4 LLM.
OpenAI’s LLMs, GPT-3.5 and GPT-4, are widely popular at present. However, it is very likely that no single generative AI model will rule them all: exponential progress in generative AI means that winners and losers will change places over time.
Google’s Bard, Meta’s Llama 2 and many open-source upstarts will vie for supremacy. Specialist, or vertical market, generative AI models will rapidly emerge.
The architecture of Grizzly AI’s software application means that it may be ported to any or all of those competing generative AI models.
This means that enterprise customers are protected from shifts in the winners and losers of the generative AI technology race, while preserving user skills and familiarity with Grizzly AI, together with most investment in integrations.
Grizzly AI’s software portability means that customers will, in the near term, be able to choose the ‘best’ generative AI model for their purpose, or to use multiple models for different purposes.
A customer could, for example, use a highly secure generative AI model for some applications and use other models that might suit specialised requirements.
This allows businesses to mix and match multiple generative AI vendor models, balancing security and functionality to maximise the benefit.
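The portability described above is commonly achieved with a thin adapter layer: one provider-agnostic interface, one small adapter per vendor, and a routing policy mapping tasks to models. A sketch of that pattern (class and field names are illustrative; the adapters here return placeholder strings instead of making real API calls):

```python
from abc import ABC, abstractmethod

class GenerativeModel(ABC):
    """Provider-agnostic interface; each vendor gets a thin adapter."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class GPT4Adapter(GenerativeModel):
    def complete(self, prompt: str) -> str:
        # Placeholder for a call to OpenAI's GPT-4 API.
        return f"[gpt-4] {prompt}"

class Llama2Adapter(GenerativeModel):
    def complete(self, prompt: str) -> str:
        # Placeholder for a call to a Llama 2 endpoint.
        return f"[llama-2] {prompt}"

def route(task: str, models: dict[str, GenerativeModel],
          policy: dict[str, str]) -> str:
    """Run the task on whichever model the policy assigns to it."""
    return models[policy[task]].complete(task)
```

Swapping or adding a vendor then means writing one new adapter and updating the policy, while the rest of the application, and users’ skills, stay unchanged.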
Grizzly AI has created a walled-garden generative AI solution in partnership with OpenAI, owner of GPT-4. Because firms use their own high-quality information and data, the results are accurate and far superior to a more general use of generative AI models.
The curse of potential ‘hallucinations’ is greatly reduced, primarily because Grizzly AI leverages only the company data and documents uploaded to the Grizzly AI application.