GPT-4, OpenAI’s newest version of the model powering ChatGPT, was introduced recently.
As before, however, the company is not disclosing how the large language model was built or trained, citing competitive and security concerns.
While the system has again impressed with its capabilities, many critics question whether its outputs are truly generative or merely feats of memorization, drawn directly from the training data.
Another major question is how many GPUs are needed to run generative AI on an enterprise’s own data. One indication comes from Elon Musk’s reported order of some ten thousand GPUs to deploy generative AI at Twitter, which suggests that very large quantities of hardware are required.
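To get a feel for why the numbers climb so quickly, here is a minimal back-of-envelope sketch in Python. The figures used (a hypothetical 175-billion-parameter model, 16-bit weights, 80 GB GPUs) are illustrative assumptions, not details from any vendor, and the estimate counts only memory for the model weights, ignoring activation memory, caching, and the extra replicas needed for throughput.

```python
import math

def gpus_needed(params_billion: float, bytes_per_param: int = 2,
                gpu_memory_gb: int = 80, usable_fraction: float = 0.9) -> int:
    """Rough lower bound on GPUs needed just to hold the model weights.

    Assumptions (illustrative, not vendor figures): 16-bit weights,
    ~10% of GPU memory reserved for overhead. Real deployments need
    considerably more for activations, KV caches, and redundancy.
    """
    weight_gb = params_billion * 1e9 * bytes_per_param / 1e9  # GB of weights
    usable_gb = gpu_memory_gb * usable_fraction               # memory left per GPU
    return math.ceil(weight_gb / usable_gb)

# A hypothetical 175B-parameter model in 16-bit precision on 80 GB GPUs
# needs at least 5 GPUs per serving replica just to hold the weights.
print(gpus_needed(175))  # -> 5
```

Multiply that per-replica figure by the number of replicas needed to serve real traffic, and orders in the thousands of GPUs stop looking surprising.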
Given the massive interest in these systems from Google, Microsoft, Baidu, NVIDIA, and more recently Alibaba, demand for GPUs is likely to increase dramatically.
Are you interested in deploying generative AI in your company, or do you have customers who could be considering it?
Contact us today about securing your supply of GPUs as soon as possible.