The future of programming
Yesterday Google announced the Gemini family of models, and their native multi-modal capabilities and performance are quite impressive. They come in three rough sizes (categories): Ultra, Pro, and Nano. Pro is currently available in Bard, with the other two coming soon. On common benchmarks, with a bit of prompt engineering, Ultra is on par with GPT-4 and Pro with GPT-3.5. Nano is interesting because there are two versions small enough to fit on a phone, and Google announced its intention to incorporate them into the Pixel 8 Pro.
One caveat about common benchmarks: they should be taken as only a rough guide to a model's capability. First, the tests themselves are flawed; many, if not all, contain errors. ImageNet had many misclassified images, and these benchmarks likewise have wrong and problematic answers. Second, it is hard to tell whether there has been any test leakage, where some of the test material makes its way into the training data. Google has tried to be very careful about leakage, but it is difficult to detect and avoid. And third, a benchmark is just a standardized test and does not directly represent your needs. Grades or the SAT may give you an idea of someone's education, but they don't tell you whether they'll be a good or happy employee or co-worker. It's best to test these models directly on your specific use case.
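Testing on your own use case can be as simple as a small harness that runs your prompts and checks the answers. The sketch below is a minimal, hypothetical example: `query_model` is a stand-in for whatever API call your model provider actually exposes (stubbed here so the code runs end to end), and the cases are illustrative.

```python
# Minimal sketch of a task-specific eval harness.
# `query_model` is a hypothetical stand-in; swap in a real model client.

def query_model(prompt: str) -> str:
    # Stub so the sketch runs without any external API.
    canned = {"Capital of France?": "Paris", "2 + 2 = ?": "4"}
    return canned.get(prompt, "unknown")

def evaluate(cases: list[tuple[str, str]]) -> float:
    """Return the fraction of cases where the model's answer matches."""
    hits = sum(
        1
        for prompt, expected in cases
        if query_model(prompt).strip().lower() == expected.lower()
    )
    return hits / len(cases)

# Your own prompts and expected answers, not a public benchmark.
cases = [("Capital of France?", "Paris"), ("2 + 2 = ?", "4")]
print(f"accuracy: {evaluate(cases):.0%}")
```

A harness like this, run against each candidate model with the prompts you actually care about, tells you far more about fitness for your task than a leaderboard score does.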
In sum, it's great to see a viable competitor to OpenAI, and the Nano models throw down the gauntlet to Apple and other device manufacturers.
They also announced AlphaCode 2, which got drowned out a bit but is still very important. (I don't think the name has anything to do with AlphaGo; they just seem to like the Alpha"bet".) AlphaCode 2 is a system that "leverages several Gemini-based models", specifically configured and developed to write code, and when evaluated on a competitive programming platform it performed at an estimated ~85th percentile. That is, it performed better than 85% of human competitive programmers.
An interesting side question: are competitive programmers more skilled in general than non-competitive programmers? They may be, since people tend to seek out the activities where they naturally excel, and that has even more interesting implications, but it's not the main point here.
What I am trying to get at is that the programming profession is on the front lines of this AI revolution and is going to undergo a major change. But I think that is OK; it has changed constantly from day one. We no longer write programs in binary code; assemblers changed that job. We no longer write in assembly; compilers and interpreters changed that job. AI will change the job from someone who writes code to someone who prompts and works with the system to write the code.
Many of us are concerned about AI bringing about job loss. I've heard of students not going into computer science because they believe we won't need programmers in the future. Every technology has changed the job landscape; we no longer have ice delivery men or ice factories. I don't know whether the number of programming jobs will be a net loss, but programming is in the crosshairs and may be a harbinger of what is to come.
I could be wrong, but I'm not overly worried. Natural language is a great advancement, but it is not a panacea. The important parts of programming are extracting and understanding requirements, selecting and configuring sub-modules, finding and addressing edge cases, and managing change. That will continue to be important. We may just be prompting our systems and tracking down the issues they create rather than the issues we create.
Besides, AlphaCode 2 is so large that it is not commercially viable, so it is not generally available ... yet.