Create OpenAPI spec using ChatGPT

If you have never worked with an OpenAPI spec, creating one can seem like a daunting task, but it is actually quite easy: you can auto-generate the OpenAPI YAML with ChatGPT. Just follow the tutorial below.

Example prompt to create an OpenAPI spec YAML using ChatGPT:

Using the request and response below, create a detailed OpenAPI spec.


Request:
curl --request POST \
	--url https://news-article-data-extract-and-summarization1.p.rapidapi.com/extract/ \
	--header 'X-RapidAPI-Host: news-article-data-extract-and-summarization1.p.rapidapi.com' \
	--header 'X-RapidAPI-Key: d28djsn7cea47a16ddb' \
	--header 'content-type: application/json' \
	--data '{
    "url": "https://techcrunch.com/2022/04/18/web-scraping-legal-court/"
}'


Response:
{
  "url": "https://techcrunch.com/2022/04/18/web-scraping-legal-court/",
  "title": "Web scraping is legal, US appeals court reaffirms | TechCrunch",
  "author": "Zack Whittaker",
  "html": "<div class=\"page\" id=\"readability-page-1\"><div>\n\t\t\t\t<p id=\"speakable-summary\">Good news for archivists, academics, researchers and journalists: Scraping publicly accessible data is legal, according to a U.S. appeals court ruling.</p>\n<p>The landmark ruling by the U.S. Ninth Circuit of Appeals is the latest in a long-running legal battle brought by LinkedIn aimed at stopping a rival company from web scraping personal information from users’ public profiles. The case <a href=\"https://techcrunch.com/2021/06/14/supreme-court-revives-linkedin-bid-to-protect-user-data-from-web-scrapers/\">reached the U.S. Supreme Court</a> last year but was sent back to the Ninth Circuit for the original appeals court to re-review the case.</p>\n<p>In its second ruling on Monday, the Ninth Circuit reaffirmed its original decision and found that scraping data that is publicly accessible on the internet is not a violation of <a href=\"https://techcrunch.com/2020/11/29/supreme-court-van-buren-hacking/\">the Computer Fraud and Abuse Act</a>, or CFAA, which governs what constitutes computer hacking under U.S. law.</p>\n<p>The Ninth Circuit’s decision is a major win for archivists, academics, researchers and journalists who use tools to mass collect, or scrape, information that is publicly accessible on the internet. Without a ruling in place, long-running projects to archive websites no longer online and using publicly accessible data for academic and research studies have been left in legal limbo.</p>\n<p>But there have been egregious cases of web scraping that have sparked privacy and security concerns. Facial recognition startup Clearview AI claims to have scraped billions of social media profile photos, prompting <a href=\"https://techcrunch.com/2021/04/21/data-brokers-bill-wyden-paul-privacy-clearview/\">several tech giants to file lawsuits</a> against the startup. Several companies, including <a href=\"https://techcrunch.com/2019/09/04/facebook-phone-numbers-exposed/\">Facebook</a>, Instagram, <a href=\"https://techcrunch.com/2021/01/11/scraped-parler-data-is-a-metadata-goldmine/\">Parler</a>, <a href=\"https://techcrunch.com/2019/06/16/millions-venmo-transactions-scraped/\">Venmo</a>&nbsp;and Clubhouse have all had users’ data scraped over the years.</p>\n<p>The case before the Ninth Circuit was originally brought by LinkedIn against Hiq Labs, a company that uses public data to analyze employee attrition. LinkedIn said Hiq’s mass web scraping of LinkedIn user profiles was against its terms of service, amounted to hacking and was therefore a violation of the CFAA. LinkedIn first lost <a href=\"https://techcrunch.com/2016/08/15/linkedin-sues-scrapers/\">the case against Hiq</a> in 2019 after the Ninth Circuit found that the CFAA does not bar anyone from scraping data that’s publicly accessible.</p>\n<p>On its second pass of the case, the Ninth Circuit said it relied on <a href=\"https://techcrunch.com/2021/06/03/supreme-court-hacking-cfaa-ruling/\">a Supreme Court decision</a> last June, during which the U.S. top court took its first look at the decades-old CFAA. 
In its ruling, the Supreme Court narrowed what constitutes a violation of the CFAA as those who gain unauthorized access to a computer system — rather than a broader interpretation of exceeding existing authorization, which the court argued could have attached criminal penalties to “a breathtaking amount of commonplace computer activity.” Using a “gate-up, gate-down” analogy, the Supreme Court said that when a computer or website’s gates are up — and therefore information is publicly accessible — no authorization is required.</p>\n<p>The Ninth Circuit, in referencing the Supreme Court’s “gate-up, gate-down” analogy, ruled that “the concept of ‘without authorization’ does not apply to public websites.”</p>\n<p>“We’re disappointed in the court’s decision. This is a preliminary ruling and the case is far from over,” said LinkedIn spokesperson Greg Snapper in a statement. “We will continue to fight to protect our members’ ability to control the information they make available on LinkedIn. When your data is taken without permission and used in ways you haven’t agreed to, that’s not okay. On LinkedIn, our members trust us with their information, which is why we prohibit unauthorized scraping on our platform.”</p>\n\n\t\t\t</div></div>",
  "text": "\n\t\t\t\tGood news for archivists, academics, researchers and journalists: Scraping publicly accessible data is legal, according to a U.S. appeals court ruling.\nThe landmark ruling by the U.S. Ninth Circuit of Appeals is the latest in a long-running legal battle brought by LinkedIn aimed at stopping a rival company from web scraping personal information from users’ public profiles. The case reached the U.S. Supreme Court last year but was sent back to the Ninth Circuit for the original appeals court to re-review the case.\nIn its second ruling on Monday, the Ninth Circuit reaffirmed its original decision and found that scraping data that is publicly accessible on the internet is not a violation of the Computer Fraud and Abuse Act, or CFAA, which governs what constitutes computer hacking under U.S. law.\nThe Ninth Circuit’s decision is a major win for archivists, academics, researchers and journalists who use tools to mass collect, or scrape, information that is publicly accessible on the internet. Without a ruling in place, long-running projects to archive websites no longer online and using publicly accessible data for academic and research studies have been left in legal limbo.\nBut there have been egregious cases of web scraping that have sparked privacy and security concerns. Facial recognition startup Clearview AI claims to have scraped billions of social media profile photos, prompting several tech giants to file lawsuits against the startup. Several companies, including Facebook, Instagram, Parler, Venmo and Clubhouse have all had users’ data scraped over the years.\nThe case before the Ninth Circuit was originally brought by LinkedIn against Hiq Labs, a company that uses public data to analyze employee attrition. LinkedIn said Hiq’s mass web scraping of LinkedIn user profiles was against its terms of service, amounted to hacking and was therefore a violation of the CFAA. LinkedIn first lost the case against Hiq in 2019 after the Ninth Circuit found that the CFAA does not bar anyone from scraping data that’s publicly accessible.\nOn its second pass of the case, the Ninth Circuit said it relied on a Supreme Court decision last June, during which the U.S. top court took its first look at the decades-old CFAA. In its ruling, the Supreme Court narrowed what constitutes a violation of the CFAA as those who gain unauthorized access to a computer system — rather than a broader interpretation of exceeding existing authorization, which the court argued could have attached criminal penalties to “a breathtaking amount of commonplace computer activity.” Using a “gate-up, gate-down” analogy, the Supreme Court said that when a computer or website’s gates are up — and therefore information is publicly accessible — no authorization is required.\nThe Ninth Circuit, in referencing the Supreme Court’s “gate-up, gate-down” analogy, ruled that “the concept of ‘without authorization’ does not apply to public websites.”\n“We’re disappointed in the court’s decision. This is a preliminary ruling and the case is far from over,” said LinkedIn spokesperson Greg Snapper in a statement. “We will continue to fight to protect our members’ ability to control the information they make available on LinkedIn. When your data is taken without permission and used in ways you haven’t agreed to, that’s not okay. On LinkedIn, our members trust us with their information, which is why we prohibit unauthorized scraping on our platform.”\n\n\t\t\t",
  "length": 3456,
  "description": "The landmark web scraping case was bounced back to the Ninth Circuit by the U.S. Supreme Court.",
  "siteName": "TechCrunch",
  "topImage": "https://techcrunch.com/wp-content/uploads/2022/04/GettyImages-1303427084-reworked.jpg",
  "date": "2022-04-18T19:16:57+00:00",
  "keywords": "",
  "summary": [
    "The landmark ruling by the U.S.",
    "The case reached the U.S.",
    "Supreme Court last year but was sent back to the Ninth Circuit for the original appeals court to re-review the case.",
    "But there have been egregious cases of web scraping that have sparked privacy and security concerns.",
    "When your data is taken without permission and used in ways you haven’t agreed to, that’s not okay."
  ],
  "sentiment": {
    "score": 26,
    "comparative": 0.04684684684684685,
    "calculation": [
      {
        "prohibit": -1
      },
      {
        "trust": 1
      }
    ],
    "postive": [
      "trust",
      "agreed",
      "ability",
      "protect",
      "supreme"
    ],
    "negative": [
      "prohibit",
      "fight",
      "disappointed"
    ]
  }
}

ChatGPT will generate a YAML file like the following:

openapi: 3.0.0
info:
  title: PipFeed News Article Extract and Summarization API
  version: 1.0.0
servers:
  - url: https://api.magicapi.dev/api/v1/
paths:
  /extract:
    post:
      summary: Extract and summarize news article data
      description: |
        This endpoint extracts and summarizes information from a given news article URL.
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                url:
                  type: string
                  description: The URL of the news article to extract and summarize.
              required:
                - url
      responses:
        '200':
          description: Successful response
          content:
            application/json:
              schema:
                type: object
                properties:
                  url:
                    type: string
                    description: The URL of the news article.
                  title:
                    type: string
                    description: The title of the news article.
                  author:
                    type: string
                    description: The author of the news article.
                  html:
                    type: string
                    description: The HTML content of the news article.
                  text:
                    type: string
                    description: The plain text content of the news article.
                  length:
                    type: integer
                    description: The length of the news article.
                  description:
                    type: string
                    description: A brief description of the news article.
                  siteName:
                    type: string
                    description: The name of the news site.
                  topImage:
                    type: string
                    format: uri
                    description: The URL of the top image associated with the news article.
                  date:
                    type: string
                    format: date-time
                    description: The publication date of the news article.
                  keywords:
                    type: string
                    description: Keywords associated with the news article.
                  summary:
                    type: array
                    items:
                      type: string
                    description: Summarized content of the news article.
                  sentiment:
                    type: object
                    properties:
                      score:
                        type: integer
                        description: The sentiment score.
                      comparative:
                        type: number
                        description: The comparative sentiment score.
                      calculation:
                        type: array
                        items:
                          type: object
                          properties:
                            prohibit:
                              type: integer
                              description: Prohibit sentiment factor.
                            trust:
                              type: integer
                              description: Trust sentiment factor.
                        description: Sentiment calculation factors.
                      positive:
                        type: array
                        items:
                          type: string
                        description: Positive sentiment keywords.
                      negative:
                        type: array
                        items:
                          type: string
                        description: Negative sentiment keywords.
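
Before publishing the generated spec, it is worth a quick sanity check, since ChatGPT can occasionally produce YAML that is syntactically off or missing pieces. The snippet below is a minimal sketch in Python: it assumes the spec was saved as openapi.yaml (an illustrative filename) and that PyYAML is installed. A dedicated validator such as openapi-spec-validator or the Redocly CLI will give a more thorough check.

# sanity_check_spec.py -- minimal structural check of the generated spec.
# Assumes the YAML was saved as "openapi.yaml" (hypothetical filename)
# and that PyYAML is installed (pip install pyyaml).
import yaml

with open("openapi.yaml") as f:
    spec = yaml.safe_load(f)  # raises a parse error if the YAML is malformed

# Checks every OpenAPI 3.x document should pass.
assert spec.get("openapi", "").startswith("3."), "missing or wrong 'openapi' version"
assert "info" in spec and "title" in spec["info"], "missing info.title"
assert spec.get("paths"), "spec declares no paths"

HTTP_METHODS = {"get", "post", "put", "delete", "patch", "options", "head", "trace"}
for path, operations in spec["paths"].items():
    for method, operation in operations.items():
        if method.lower() not in HTTP_METHODS:
            continue  # skip path-level keys such as "parameters" or "summary"
        # Every operation should document at least one response.
        assert operation.get("responses"), f"{method.upper()} {path} has no responses"

print("Basic structure looks OK:", ", ".join(spec["paths"]))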

If you have more paths, simply ask ChatGPT to update the YAML with a prompt like the following:

Update the above YAML and add additional paths based on the request and response below.

Request:
<Your curl request>


Response:
<Your JSON response>
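
Once the spec covers all of your paths, you can verify it against the live API by replaying the original request and comparing the response with the fields declared in the schema. The snippet below is a minimal sketch using Python's requests package; it mirrors the curl call from the prompt above and assumes your own RapidAPI key is available in a RAPIDAPI_KEY environment variable (an illustrative choice, not something the API requires).

# replay_request.py -- replay the example request and compare the live response
# against the fields declared in the generated spec. Assumes the "requests"
# package is installed and RAPIDAPI_KEY holds your RapidAPI key.
import os
import requests

url = "https://news-article-data-extract-and-summarization1.p.rapidapi.com/extract/"
headers = {
    "X-RapidAPI-Host": "news-article-data-extract-and-summarization1.p.rapidapi.com",
    "X-RapidAPI-Key": os.environ["RAPIDAPI_KEY"],
    "content-type": "application/json",
}
payload = {"url": "https://techcrunch.com/2022/04/18/web-scraping-legal-court/"}

response = requests.post(url, json=payload, headers=headers, timeout=30)
response.raise_for_status()
body = response.json()

# Top-level properties declared in the generated OpenAPI schema.
expected_fields = {"url", "title", "author", "html", "text", "length",
                   "description", "siteName", "topImage", "date",
                   "keywords", "summary", "sentiment"}
missing = expected_fields - body.keys()
print("Missing fields:", missing or "none")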
