A couple of weeks ago, I ran my standard suite of programming tests against the free version of the Perplexity.ai chatbot. At the end of that article, I offered to run the tests against the $20/mo Pro version if enough of you were interested. I did get some requests, so that's what we're doing here.
Like most other pro versions, to use Perplexity Pro, you must create an account. You can sign in using either Google or Apple auth methods or a SAML sign-in. Alternatively, you can create an account using your email address, which is what I did.
Unfortunately, the site doesn't seem to give you any way to set a password or any form of multifactor authentication. You're sent an email with a code, and that's it. I don't mind getting an email code, but I'm genuinely disturbed by web apps relying solely on an email code without, at the very least, a password. But that's what Perplexity.AI is doing.
The other interesting aspect of Perplexity Pro is its cornucopia of AI models. As you can see in the image below, you can choose between a variety of different models, based on the kind of work you're doing. I chose Default to see what that did with the tests. After running the tests, I asked Perplexity Pro what model it used for them, and it told me ChatGPT GPT-4.
And with that, let's run some tests.
1. Writing a WordPress plugin
This challenge is a fairly straightforward programming task for anyone with a modicum of web programming experience. It presents a user interface in the administration dashboard with two fields: one is a list of names to be randomized, and the other is the output.
The only real gotcha is that the list of names can contain duplicates, and rather than removing the extra names, the instructions are to make sure the duplicate names are separated from one another.
This was a real, requested function that my wife needed for her e-commerce site. Every month, they do a wheel spin, and some people qualify for multiple entries.
Using Perplexity Pro's default model, the AI succeeded in producing a workable user interface and functional code, providing both a PHP block and a JavaScript block to manage the text areas and the randomization logic.
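To make the duplicate-separation gotcha concrete, here's a minimal JavaScript sketch of one common way to do it: group the names by frequency, then fill even positions first and odd positions second, so copies of the same name land at least two slots apart. This is purely illustrative, not the code Perplexity Pro generated, and it assumes no single name accounts for more than half the list.

```javascript
// Randomize a list of names while keeping duplicate entries
// separated from one another (rather than removing them).
// Illustrative sketch only -- not the generated plugin code.
function spacedShuffle(names) {
  // Count how many times each name appears.
  const counts = new Map();
  for (const n of names) counts.set(n, (counts.get(n) || 0) + 1);

  // Randomize the order of distinct names, then sort so the
  // most frequent names are placed first and can be spread out.
  const groups = [...counts.entries()]
    .sort(() => Math.random() - 0.5) // quick (biased) randomizer; fine for illustration
    .sort((a, b) => b[1] - a[1]);

  // Fill even slots first, then wrap to odd slots, so copies of
  // the same name end up at least two positions apart.
  const out = new Array(names.length);
  let slot = 0;
  for (const [name, count] of groups) {
    for (let c = 0; c < count; c++) {
      out[slot] = name;
      slot += 2;
      if (slot >= out.length) slot = 1; // switch to odd slots
    }
  }
  return out;
}
```

This only guarantees separation when no name makes up more than half the entries; past that point, some adjacency is mathematically unavoidable.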
Here are the aggregate results of this and previous tests:
- Perplexity Pro: Interface: good, functionality: good
- Perplexity: Interface: good, functionality: good
- Claude 3.5 Sonnet: Interface: good, functionality: fail
- ChatGPT using GPT-4o: Interface: good, functionality: good
- Microsoft Copilot: Interface: adequate, functionality: fail
- Meta AI: Interface: adequate, functionality: fail
- Meta Code Llama: Complete failure
- Google Gemini Advanced: Interface: good, functionality: fail
- ChatGPT using GPT-4: Interface: good, functionality: good
- ChatGPT using GPT-3.5: Interface: good, functionality: good
2. Rewriting a string function
For each test, I open a new session with the AI. In this test, I asked the AI to rewrite a block of code that had a bug. The code was designed to validate dollars-and-cents input, which should contain a certain number of digits before the decimal point, a possible decimal point, and two digits after the decimal point.
Unfortunately, the code I shipped only allowed integer values. After a few user reports, I decided to feed the code to the AI for a rewrite. My code uses regular expressions, which are a formulaic way of specifying a format. Regular expressions themselves are fun, but debugging them is not.
In the case of this test, Perplexity Pro did a very good job. The resulting validation code properly flagged entries that didn't match the format for dollars and cents, allowing up to two digits after the decimal.
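For readers curious what that kind of fix looks like, here's a hedged JavaScript sketch of a validator matching the description above: some digits, an optional decimal point, and up to two digits after it. The seven-digit cap on whole dollars is my own assumption, and this is not my actual shipped code.

```javascript
// Validate a dollars-and-cents string: 1-7 digits, then an
// optional decimal point followed by one or two digits.
// Sketch of the technique only; the 7-digit cap is assumed.
function isValidAmount(value) {
  return /^\d{1,7}(\.\d{1,2})?$/.test(value.trim());
}
```

The integer-only bug described above would correspond to a pattern like `/^\d+$/`, which rejects anything containing a decimal point; making the fractional part optional rather than forbidden is the essence of the rewrite.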
Here are the aggregate results of this and previous tests:
- Perplexity Pro: Succeeded
- Perplexity: Succeeded
- Claude 3.5 Sonnet: Failed
- ChatGPT using GPT-4o: Succeeded
- Microsoft Copilot: Failed
- Meta AI: Failed
- Meta Code Llama: Succeeded
- Google Gemini Advanced: Failed
- ChatGPT using GPT-4: Succeeded
- ChatGPT using GPT-3.5: Succeeded
3. Finding an annoying bug
This test had me stumped for a few hours. Before it was a test, it was a bug in the code for an actual product. The problem was that whatever was going wrong wasn't related to any obvious logic or language issue.
Seriously frustrated, I decided to feed ChatGPT the code as well as the error dump and ask it for help. Fortunately, it found what I had done wrong and gave me guidance on what to fix.
The reason I'm including this in the set of tests is that the bug wasn't in language or logic; it was in knowledge of the WordPress framework. While WordPress is popular, framework knowledge is often considered the folklore of a programming environment, something passed down from developer to developer, rather than something that can be rigorously learned from a knowledge base.
Nonetheless, ChatGPT, as well as Perplexity and now Perplexity Pro, did find the problem. The error was caused by a parameter-calling issue buried in the framework itself. The obvious answer, which you might come up with strictly by reading the error messages generated by the code, was actually wrong.
To solve it, the AI had to show a deeper understanding of how all the systems work together, something Perplexity Pro did successfully.
Here are the aggregate results of this and previous tests:
- Perplexity: Succeeded
- Perplexity Pro: Succeeded
- Claude 3.5 Sonnet: Succeeded
- ChatGPT using GPT-4o: Succeeded
- Microsoft Copilot: Failed
- Meta AI: Succeeded
- Meta Code Llama: Failed
- Google Gemini Advanced: Failed
- ChatGPT using GPT-4: Succeeded
- ChatGPT using GPT-3.5: Succeeded
4. Writing a script
Well, this is interesting. Perplexity Pro passed this test, but the free version of Perplexity failed when I tested it a few weeks ago. So, yay!
But let's dive into this a bit. The challenge here is that I ask the AI to write a script that spans three environments: the Chrome DOM (document object model), AppleScript (Apple's native scripting language), and Keyboard Maestro (a very cool Mac automation tool that's fairly obscure, but to me, mission-critical).
Most of the AIs failed because they didn't have any information on Keyboard Maestro in their knowledge bases and, as such, didn't produce the code necessary for the script to do what I wanted.
Until now, only Gemini Advanced and ChatGPT using GPT-4 and GPT-4o had passed this test. In answering the question, Perplexity Pro presented a Pro Search view. As you can see, the Pro Search view ran a search for "Keyboard Maestro AppleScript Google Chrome tabs." It also used the main Keyboard Maestro forum as a source, which is the best place to get Keyboard Maestro coding help.
The result was a success.
Here are the aggregate results of this and previous tests:
- Perplexity Pro: Succeeded
- Perplexity: Failed
- Claude 3.5 Sonnet: Failed
- ChatGPT using GPT-4o: Succeeded, but with reservations
- Microsoft Copilot: Failed
- Meta AI: Failed
- Meta Code Llama: Failed
- Google Gemini Advanced: Succeeded
- ChatGPT using GPT-4: Succeeded
- ChatGPT using GPT-3.5: Failed
Overall results
Here are the overall results of the four tests:
As you can see, Perplexity Pro joins only ChatGPT with GPT-4 and GPT-4o in having a perfect score of four out of four succeeded. After running my tests, I checked with Perplexity Pro's AI, and it informed me it used GPT-4 to analyze and respond to my tests.
Given that GPT-4/4o is the only AI that nailed all four of my tests before, this makes sense. So far, I haven't found any other model that can fully and correctly pass all four programming tests.
If you choose Perplexity Pro, I can fairly confidently state that it should be able to do a very good job of helping you program.
Have you tried coding with Perplexity, Copilot, Meta AI, Gemini, or ChatGPT? What has your experience been? Let us know in the comments below.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.