← research-and-report

Run Detail

caf8d352-22aa-4859-a67c-e86a248a51a6
success

Started

2026-03-01 22:55:42

Finished

2026-03-01 22:56:25

Steps

s1 web_search success 2026-03-01 22:55:42 → 2026-03-01 22:55:43
Input (47 bytes)
[
  {
    "query": "X API pay-per-use cost calculator"
  }
]
Output (2386 bytes)
[
  {
    "link": "https://docs.x.com/x-api/getting-started/pricing",
    "snippet": "The X API uses pay-per-usage pricing. No subscriptions—pay only for what you use. View pricing \u0026 purchase credits ...",
    "title": "Pricing - X - X Developer Platform"
  },
  {
    "link": "https://docs.x.com/x-api/getting-started/about-x-api",
    "snippet": "The current version of the X API with modern features and flexible pricing.Why use v2: Pay-per-usage pricing; Modern JSON response format; Flexible fields and ...",
    "title": "About the X API"
  },
  {
    "link": "https://www.getxapi.com/twitter-api-cost-calculator",
    "snippet": "Twitter API Cost Calculator. Enter your monthly volume to estimate spend and compare GetXAPI per-call pricing against official X API pay-per-use rates.",
    "title": "Twitter API Cost Calculator - GetXAPI"
  },
  {
    "link": "https://elfsight.com/blog/how-to-get-x-twitter-api-key-in-2026/",
    "snippet": "X provides an interactive API cost calculator where you can input your expected usage patterns and see exactly what you'd pay. X ...",
    "title": "How to Get X API Key: Complete 2026 Guide to Pricing ... - Elfsight"
  },
  {
    "link": "https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476",
    "snippet": "We're thrilled to officially announce the launch of our new X API Pay-Per-Use pricing model! This update is designed to empower the heart of our ...",
    "title": "Announcing the Launch of X API Pay-Per-Use Pricing"
  },
  {
    "link": "https://www.techbuzz.ai/articles/x-tests-pay-per-use-api-model-to-win-back-developers",
    "snippet": "X's new API calculator lets developers estimate costs upfront, a transparency move that stands in stark contrast to the all-or-nothing tiers ...",
    "title": "X Tests Pay-Per-Use API Model to Win Back Developers"
  },
  {
    "link": "https://pricepertoken.com/subscription-calculator",
    "snippet": "Your estimated API cost is $10.20/mo compared to ChatGPT Plus at $20.00/mo. The API gives you full flexibility with no rate limits. Note: Subscriptions include ...",
    "title": "Subscription vs API Cost Calculator - ChatGPT Plus \u0026 Claude Pro vs ..."
  },
  {
    "link": "https://scrapecreators.com/blog/twitter-s-pay-per-use-api-could-this-finally-kill-the-scraping-economy",
    "snippet": "Better Cost Control: Instead of paying for unused quota or being locked into expensive tiers, users pay only for what they actually consume.",
    "title": "Twitter's Pay-Per-Use API: Could This Finally Kill the Scraping ..."
  }
]
s2 web_search success 2026-03-01 22:55:43 → 2026-03-01 22:55:44
Input (2386 bytes)
[
  {
    "link": "https://docs.x.com/x-api/getting-started/pricing",
    "snippet": "The X API uses pay-per-usage pricing. No subscriptions—pay only for what you use. View pricing \u0026 purchase credits ...",
    "title": "Pricing - X - X Developer Platform"
  },
  {
    "link": "https://docs.x.com/x-api/getting-started/about-x-api",
    "snippet": "The current version of the X API with modern features and flexible pricing.Why use v2: Pay-per-usage pricing; Modern JSON response format; Flexible fields and ...",
    "title": "About the X API"
  },
  {
    "link": "https://www.getxapi.com/twitter-api-cost-calculator",
    "snippet": "Twitter API Cost Calculator. Enter your monthly volume to estimate spend and compare GetXAPI per-call pricing against official X API pay-per-use rates.",
    "title": "Twitter API Cost Calculator - GetXAPI"
  },
  {
    "link": "https://elfsight.com/blog/how-to-get-x-twitter-api-key-in-2026/",
    "snippet": "X provides an interactive API cost calculator where you can input your expected usage patterns and see exactly what you'd pay. X ...",
    "title": "How to Get X API Key: Complete 2026 Guide to Pricing ... - Elfsight"
  },
  {
    "link": "https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476",
    "snippet": "We're thrilled to officially announce the launch of our new X API Pay-Per-Use pricing model! This update is designed to empower the heart of our ...",
    "title": "Announcing the Launch of X API Pay-Per-Use Pricing"
  },
  {
    "link": "https://www.techbuzz.ai/articles/x-tests-pay-per-use-api-model-to-win-back-developers",
    "snippet": "X's new API calculator lets developers estimate costs upfront, a transparency move that stands in stark contrast to the all-or-nothing tiers ...",
    "title": "X Tests Pay-Per-Use API Model to Win Back Developers"
  },
  {
    "link": "https://pricepertoken.com/subscription-calculator",
    "snippet": "Your estimated API cost is $10.20/mo compared to ChatGPT Plus at $20.00/mo. The API gives you full flexibility with no rate limits. Note: Subscriptions include ...",
    "title": "Subscription vs API Cost Calculator - ChatGPT Plus \u0026 Claude Pro vs ..."
  },
  {
    "link": "https://scrapecreators.com/blog/twitter-s-pay-per-use-api-could-this-finally-kill-the-scraping-economy",
    "snippet": "Better Cost Control: Instead of paying for unused quota or being locked into expensive tiers, users pay only for what they actually consume.",
    "title": "Twitter's Pay-Per-Use API: Could This Finally Kill the Scraping ..."
  }
]
Output (1588 bytes)
[
  {
    "link": "https://haystack.deepset.ai/blog/query-decomposition",
    "snippet": "This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach.",
    "title": "Advanced RAG: Query Decomposition \u0026 Reasoning - Haystack"
  },
  {
    "link": "https://www.jetbrains.com/help/youtrack/cloud/search-and-command-attributes.html",
    "snippet": "This page provides a list of attributes and keywords that are used in YouTrack query language. You'll also find a complete list of operators, symbols, and ...",
    "title": "Search Query Reference | YouTrack Cloud Documentation - JetBrains"
  },
  {
    "link": "https://dev.to/j12y/query-github-repo-topics-using-graphql-35ha",
    "snippet": "Creating a customized user profile page for GitHub to showcase work projects and make navigation to relevant topics easier.",
    "title": "Query GitHub Repo Topics Using GraphQL - DEV Community"
  },
  {
    "link": "https://www.sprinklr.com/help/articles/faqs-and-advanced-usecases/create-an-advanced-topic-query/646331628ea3c9635cf36711",
    "snippet": "Advanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more. By ...",
    "title": "‎Create an Advanced Topic Query | Sprinklr Help Center"
  },
  {
    "link": "https://learn.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language",
    "snippet": "The query language for the Azure Resource Graph supports many operators and functions. Each work and operate based on Kusto Query Language (KQL).",
    "title": "Understanding the Azure Resource Graph query language - Microsoft"
  }
]
s3 fetch_content success 2026-03-01 22:55:44 → 2026-03-01 22:55:50
Input (3973 bytes)
[
  {
    "link": "https://docs.x.com/x-api/getting-started/pricing",
    "snippet": "The X API uses pay-per-usage pricing. No subscriptions—pay only for what you use. View pricing \u0026 purchase credits ...",
    "title": "Pricing - X - X Developer Platform"
  },
  {
    "link": "https://docs.x.com/x-api/getting-started/about-x-api",
    "snippet": "The current version of the X API with modern features and flexible pricing.Why use v2: Pay-per-usage pricing; Modern JSON response format; Flexible fields and ...",
    "title": "About the X API"
  },
  {
    "link": "https://www.getxapi.com/twitter-api-cost-calculator",
    "snippet": "Twitter API Cost Calculator. Enter your monthly volume to estimate spend and compare GetXAPI per-call pricing against official X API pay-per-use rates.",
    "title": "Twitter API Cost Calculator - GetXAPI"
  },
  {
    "link": "https://elfsight.com/blog/how-to-get-x-twitter-api-key-in-2026/",
    "snippet": "X provides an interactive API cost calculator where you can input your expected usage patterns and see exactly what you'd pay. X ...",
    "title": "How to Get X API Key: Complete 2026 Guide to Pricing ... - Elfsight"
  },
  {
    "link": "https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476",
    "snippet": "We're thrilled to officially announce the launch of our new X API Pay-Per-Use pricing model! This update is designed to empower the heart of our ...",
    "title": "Announcing the Launch of X API Pay-Per-Use Pricing"
  },
  {
    "link": "https://www.techbuzz.ai/articles/x-tests-pay-per-use-api-model-to-win-back-developers",
    "snippet": "X's new API calculator lets developers estimate costs upfront, a transparency move that stands in stark contrast to the all-or-nothing tiers ...",
    "title": "X Tests Pay-Per-Use API Model to Win Back Developers"
  },
  {
    "link": "https://pricepertoken.com/subscription-calculator",
    "snippet": "Your estimated API cost is $10.20/mo compared to ChatGPT Plus at $20.00/mo. The API gives you full flexibility with no rate limits. Note: Subscriptions include ...",
    "title": "Subscription vs API Cost Calculator - ChatGPT Plus \u0026 Claude Pro vs ..."
  },
  {
    "link": "https://scrapecreators.com/blog/twitter-s-pay-per-use-api-could-this-finally-kill-the-scraping-economy",
    "snippet": "Better Cost Control: Instead of paying for unused quota or being locked into expensive tiers, users pay only for what they actually consume.",
    "title": "Twitter's Pay-Per-Use API: Could This Finally Kill the Scraping ..."
  },
  {
    "link": "https://haystack.deepset.ai/blog/query-decomposition",
    "snippet": "This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach.",
    "title": "Advanced RAG: Query Decomposition \u0026 Reasoning - Haystack"
  },
  {
    "link": "https://www.jetbrains.com/help/youtrack/cloud/search-and-command-attributes.html",
    "snippet": "This page provides a list of attributes and keywords that are used in YouTrack query language. You'll also find a complete list of operators, symbols, and ...",
    "title": "Search Query Reference | YouTrack Cloud Documentation - JetBrains"
  },
  {
    "link": "https://dev.to/j12y/query-github-repo-topics-using-graphql-35ha",
    "snippet": "Creating a customized user profile page for GitHub to showcase work projects and make navigation to relevant topics easier.",
    "title": "Query GitHub Repo Topics Using GraphQL - DEV Community"
  },
  {
    "link": "https://www.sprinklr.com/help/articles/faqs-and-advanced-usecases/create-an-advanced-topic-query/646331628ea3c9635cf36711",
    "snippet": "Advanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more. By ...",
    "title": "‎Create an Advanced Topic Query | Sprinklr Help Center"
  },
  {
    "link": "https://learn.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language",
    "snippet": "The query language for the Azure Resource Graph supports many operators and functions. Each work and operate based on Kusto Query Language (KQL).",
    "title": "Understanding the Azure Resource Graph query language - Microsoft"
  }
]
Output (130504 bytes)
[
  {
    "content_readable": "The X API uses pay-per-usage pricing. No subscriptions—pay only for what you use.\n\nHow it works\n\nCredit-based\n\nPurchase credits upfront in the Developer Console. Credits are deducted as you make API requests.\n\nPer-endpoint pricing\n\nDifferent endpoints have different costs. View current rates in the Developer Console.\n\nNo commitments\n\nNo contracts, subscriptions, or minimum spend. Start and stop anytime.\n\nReal-time tracking\n\nMonitor usage and costs live in the Developer Console.\n\nEarn free xAI API credits when you purchase X API credits—up to 20% back based on your spend. Learn more\n\nIf you are on a legacy subscription package (Basic or Pro), you can opt in to Pay-per-use pricing directly from the Developer Console. If you’d like to switch back to your legacy plan at any time, you can do so from the settings page within the Developer Console.\n\nDeduplication\n\nAll resources are deduplicated within a 24-hour UTC day window. If you request and are charged for a resource (such as a Post), requesting the same resource again within that window will not incur an additional charge. This means:\n\nRequesting the same Post multiple times in a day counts as one charge\nThe deduplication window resets at midnight UTC\nThis applies to all billable resources (Posts, users, etc.)\n\nDeduplication is a soft guarantee. While it occurs in the vast majority of cases, there may be specific edge cases like service outages that result in resources not being deduplicated.\n\nCredit balance\n\nYour credit balance is displayed in the Developer Console. Credits are deducted in real-time as you make API requests.\n\nMonitor your credit balance regularly to avoid service interruptions. Add credits before your balance reaches zero to ensure uninterrupted API access.Note: It is possible for an account credit balance to go slightly negative. In this case, API requests will be blocked until you add credits to cover the negative balance.\n\nAuto-recharge\n\nEnable auto-recharge to automatically top up your credit balance and avoid service interruptions. Configure this in the Developer Console:\n\nSetting\tDescription\nRecharge amount\tThe amount to add when auto-recharge triggers (e.g., $25)\nTrigger threshold\tAuto-recharge activates when your balance falls below this amount (e.g., $5)\n\nAuto-recharge requires a saved payment method set as your default. You can cancel anytime in the Developer Console or by contacting support.\n\nSpending limits\n\nSet a maximum amount you can spend per billing cycle to control costs. When the limit is reached, API requests will be blocked until the next billing cycle.\n\nOption\tDescription\nSpending limit\tSet a specific dollar amount as your maximum spend per billing cycle\n\nUse spending limits to prevent unexpected charges, especially during development and testing.\n\nFree xAI API Credits\n\nWhen you purchase X API credits, you can earn free xAI API credits based on your cumulative spend during a billing cycle.\n\nTo receive free xAI credits, you must link your xAI team to your X developer account. You can do this by visiting your account settings in the developer console.\n\nHow it works\n\nYour cumulative spend is tracked throughout each billing cycle. As you cross spending thresholds, you unlock higher reward rates. When a new billing cycle starts, your cumulative spend resets to $0.\n\nCumulative spend\tRate\n$0 – $199\t0%\n$200 – $499\t10%\n$500 – $999\t15%\n$1,000+\t20%\n\nThe rate applies to your entire cumulative balance, but you only receive the delta—what’s newly owed minus what was already credited.\n\nExample\n\nSuppose you make several purchases throughout a billing cycle:\n\nPurchase\tRate\tTotal owed\tAlready credited\tYou receive\n$100\t0%\t$0\t$0\t$0\n$100\t10%\t$20\t$0\t$20\n$150\t10%\t$35\t$20\t$15\n$150\t15%\t$75\t$35\t$40\n$250\t15%\t$112.50\t$75\t$37.50\n$250\t20%\t$200\t$112.50\t$87.50\n$1,000\t$200\n\nThis is the same amount you’d receive from a single $1,000 purchase—the order and size of purchases doesn’t affect your total rewards.\n\nMonitoring usage\n\nTrack your API usage programmatically with the Usage endpoint:\n\ncurl \"https://api.x.com/2/usage/tweets\" \\\n  -H \"Authorization: Bearer $BEARER_TOKEN\"\n\n\nThis returns daily Post consumption counts, helping you:\n\nTrack consumption against your budget\nSet up alerts when approaching limits\nIdentify high-consumption endpoints\nGenerate usage reports\n\nEnterprise pricing\n\nFor high-volume access with dedicated support, custom rate limits, and additional features, contact our enterprise sales team.\n\nPay-per-usage plans are subject to a monthly cap of 2 million Post reads. If you need higher volume, consider an Enterprise plan.\n\nNext steps",
    "link": "https://docs.x.com/x-api/getting-started/pricing",
    "snippet": "The X API uses pay-per-usage pricing. No subscriptions—pay only for what you use. View pricing \u0026 purchase credits ...",
    "title": "Pricing - X - X Developer Platform"
  },
  {
    "content_readable": "The X API provides programmatic access to X’s public conversation. Retrieve posts, analyze trends, build integrations, and create new experiences on the platform.\n\nWhat you can do\n\nCapability\tDescription\nRead posts\tSearch, look up, and stream posts in real-time\nPublish content\tCreate posts, replies, and threads\nManage users\tLook up users, manage follows, blocks, and mutes\nAnalyze data\tAccess metrics, trends, and engagement analytics\nBuild integrations\tSend DMs, manage lists, and interact with Spaces\n\nAPI versions\n\nX API v2 (Recommended)\n\nX API v1.1 (Legacy)\n\nEnterprise\n\nThe current version of the X API with modern features and flexible pricing.Why use v2:\n\nPay-per-usage pricing\nModern JSON response format\nFlexible fields and expansions\nAdvanced features: annotations, conversation tracking, edit history\nAll new endpoints and features\n\nGetting started:\n\nSign up at console.x.com\nCreate an app and get credentials\nMake your first request\n\nThe previous version of the X API. Limited support; use v2 for new projects.Still available:\n\nSome media upload endpoints\nLegacy streaming (deprecated)\nSome specialized endpoints\n\nMigrating to v2: See the migration guide for endpoint mapping and data format changes.\n\nHigh-volume access for businesses with advanced needs.Features:\n\nComplete firehose access\nHistorical data backfill\nDedicated support\nCustom rate limits\nCompliance streams\n\nContact enterprise sales →\n\nAvailable resources\n\nThe X API provides access to these resource types:\n\nPosts\n\nSearch, retrieve, create, and delete posts. Access timelines, threads, and quote posts.\n\nUsers\n\nLook up profiles, manage relationships, and access follower data.\n\nSpaces\n\nDiscover live audio conversations and participants.\n\nDirect Messages\n\nSend and receive private messages between users.\n\nLists\n\nCreate and manage curated lists of accounts.\n\nTrends\n\nAccess trending topics by location.\n\nv2 highlights\n\nFields and expansions\n\nRequest only the data you need. Use fields parameters to select specific attributes and expansions to include related objects.\n\ncurl \"https://api.x.com/2/tweets/123?tweet.fields=created_at,public_metrics\u0026expansions=author_id\u0026user.fields=username\" \\\n  -H \"Authorization: Bearer $TOKEN\"\n\n\nLearn more about fields →\n\nPost annotations\n\nPosts include semantic annotations identifying people, places, products, and topics. Filter streams and searches by topic.Learn more about annotations →\n\nEngagement metrics\n\nAccess public metrics (likes, reposts, replies) and private metrics (impressions, clicks) for your own posts.Learn more about metrics →\n\nConversation tracking\n\nEdit history\n\nAccess the edit history of posts, including all previous versions and edit metadata.Learn more about edit posts →\n\nPricing\n\nX API v2 uses pay-per-usage pricing:\n\nBenefit\tDescription\nNo subscriptions\tPay only for what you use\nCredit-based\tPurchase credits, deducted per request\nReal-time tracking\tMonitor usage in the Developer Console\nDeduplication\tSame resource requested twice in 24 hours is only charged once\n\nPay-per-usage plans are subject to a monthly cap of 2 million Post reads. If you need higher volume, consider an Enterprise plan.\n\nView pricing details →\n\nNext steps",
    "link": "https://docs.x.com/x-api/getting-started/about-x-api",
    "snippet": "The current version of the X API with modern features and flexible pricing.Why use v2: Pay-per-usage pricing; Modern JSON response format; Flexible fields and ...",
    "title": "About the X API"
  },
  {
    "content_readable": "Enter your monthly volume to estimate spend and compare GetXAPI per-call pricing against official X API pay-per-use rates.\n\nTweet requests / month\n\nGetXAPI: $0.001 / call\n\nOfficial X: $0.005 / read, $0.010 / write\n\nUser requests / month\n\nGetXAPI: $0.001 / call\n\nOfficial X: $0.010 / read, $0.015 / write\n\nDM requests / month\n\nGetXAPI: $0.002 / call\n\nOfficial X: $0.010 / read, $0.015 / write\n\nEstimated Monthly Cost at Your Volume\n\nTotal input volume: 75,000 requests / month\n\nProvider\tEstimated Monthly Spend\tEffective Cost / 1,000\tPricing Model\nGetXAPI\t$85.00\t$1.13\tPay per call (no caps)\nOfficial X API\t$675.00\t$9.00\tPay per use\n\nYou save $590.00/mo with GetXAPI (87% less than official X API)\n\nSources: official X API pay-per-use pricing from developer.x.com/#pricing. Official X costs above use read rates for comparison; write operations cost more ($0.010–$0.015/request). Pricing last verified on February 9, 2026.",
    "link": "https://www.getxapi.com/twitter-api-cost-calculator",
    "snippet": "Twitter API Cost Calculator. Enter your monthly volume to estimate spend and compare GetXAPI per-call pricing against official X API pay-per-use rates.",
    "title": "Twitter API Cost Calculator - GetXAPI"
  },
  {
    "content_readable": "The X API pricing has dramatically changed since 2023 – free access is effectively gone. This complete guide covers authentication, rate limits, optimization strategies, and real-world use cases for building scalable X integrations with confidence.\n\n3 weeks ago\n\nThe X API (formerly Twitter API) has undergone dramatic changes since Elon Musk’s acquisition in 2023. What was once a free, developer-friendly platform is now a premium service with strict pricing tiers and carefully controlled access levels. For developers building bots, integrating real-time data, or creating social media management tools, understanding the current X API landscape is critical.\n\nThis comprehensive guide walks you through everything you need to know about obtaining X API credentials in 2026, understanding actual costs, and optimizing your implementation for efficiency.\n\nEssential concepts covered:\n\nHow X API pricing evolved from free to paid and the emerging pay-per-use model\nCurrent tiers breakdown and which tier fits your use case\nStep-by-step process to get your API credentials from the Developer Portal\nModern authentication methods and permission scopes\nFive proven optimization strategies to reduce costs and improve performance\n\nLet’s start by understanding where the X API fits into your development workflow and what’s currently available.\n\nThe X API Evolution: What Changed\n\nThe Twitter API has evolved dramatically over the years. Here’s the timeline of major changes:\n\nDate Event Impact on Developers\nOctober 2022 Elon Musk acquires Twitter Speculation about API changes begins\nFebruary 2023 Free API access eliminated Third-party clients (Tweetbot, Echofon) shut down; pricing becomes mandatory\nMarch 2023 Paid tiers introduced ($100, $2,500, $42,000) Entry price jumps 100x; developer ecosystem fragments\nJune 2024 Basic tier pricing doubles to $200/month Increased barrier to entry for indie developers\nOctober 2024 Official rebrand: Twitter → X All documentation and branding updated; confusing for legacy users\nNovember 2025 Pay-per-use pricing beta launches New consumption-based model with $500 developer vouchers for testing\n\nFree access became $200–$5,000/month in four years. Before planning an implementation, understand what the API actually provides and which tier matches your needs.\n\nWhat Can You Build With the X API?\n\nThe X API enables programmatic access to X’s infrastructure—from retrieving data to publishing content to automating responses. Here are the most common applications:\n\nBrand Monitoring \u0026 Social Intelligence\n\nTrack mentions, competitor activity, and trending conversations in real-time. Filtered streams deliver instant alerts when specific keywords or accounts generate activity, enabling teams to respond quickly to brand-relevant events.\n\nContent Scheduling\n\nAutomate posting schedules, manage multiple accounts from a single dashboard, and coordinate content workflows. Agencies and creators use these tools to handle dozens of X accounts without manual login-and-post cycles.\n\nWebsite Content Integration\n\nEmbed live X feeds, individual tweets, and trending topics directly into websites. Publishers keep content synchronized with live X activity without requiring manual updates or outdated embeds.\n\nData Analysis and Research\n\nAccess structured data for large-scale studies, trend analysis, and market research. The API provides historical search, engagement metrics, and user data at volumes that would be impossible to collect manually.\n\nAI \u0026 Sentiment Analysis\n\nFeed real-time X data into machine learning models, language models, and sentiment analysis systems. Applications range from audience monitoring to discourse analysis to predictive analytics.\n\nX API Pricing: The 2026 Tier System\n\nAs of today, X is testing a revolutionary pay-per-use pricing model, but the traditional tier system remains the active standard. Here’s what you need to know about both approaches.\n\n💲 Current Standard Pricing\n\nThe tiered pricing structure consists of three main tiers, each designed for different scales of usage:\n\nTier\tMonthly Cost\tAnnual Savings\tBest For\tKey Capabilities\nFree\t$0\t—\tDevelopment and testing only\t500 posts/month, read-heavy, 1 req per 24hrs on most endpoints, limited endpoint access\nBasic\t$200\t$2,100/year (12.5% savings)\tSmall projects, content monitoring, single app usage\t15,000 read requests/month, 50,000 write requests/month, standard endpoint access\nPro\t$5,000\t$54,000/year (10% savings)\tGrowing applications, full feature set, mission-critical systems\t1,000,000 read requests/month, 300,000 write requests/month, full endpoint access, priority support\nEnterprise\t$42,000+\tCustom pricing\tLarge-scale systems, dedicated infrastructure\tCustom rate limits, SLAs, dedicated support, advanced features, volumetric discounts\n\nWhile Basic is 25x cheaper ($200 vs $5,000), Pro gives you 100x more read capacity and unlocks critical features like full-archive search and real-time filtering. Most companies scale directly from Free → Basic → Pro.\n\n💢 What Changed: The Death of Free Access\n\nThe shift from free to paid access served two purposes: generating revenue from the platform’s data value, and reducing abuse. Free API access enabled spam bots, data scrapers, and malicious automation at scale.\n\nAvailable with Free Tier\n\n500 posts per calendar month (about 16-17 per day)\nRate-limited to 1 request per 24 hours on most endpoints\nNo posting, liking, or engaging – read-only access to public data only\nCannot write posts, create resources, or perform account actions\nNo access to trends, direct messaging, or advanced features\n\nReal-world impact: The Free tier is genuinely only for proof-of-concept work and local development testing. For any production application, you must budget for the Basic tier at minimum ($200/month).\n\n🔮 The New Pay-Per-Use Model (Beta)\n\nIn November 2025, X launched a closed beta for a revolutionary pricing approach: pay only for what you use. Instead of fixed monthly fees, developers in the beta pay individual prices for different API operations – similar to AWS or Google Cloud’s consumption-based billing.\n\nHow Pay-Per-Use Works\n\nThe beta pricing model assigns specific costs to each operation type. For example:\n\nReading a post costs a specific price (varies by operation)\nSearching posts costs more (higher computational load)\nCreating a post has its own rate\nAccessing trends uses a different pricing tier\nDirect messaging has separate pricing\n\nImportant Note: The pay-per-use model is in closed beta as of December 2025. Plan your implementation based on current tier pricing, but monitor the official X Developer Twitter (@XDevelopers) for announcements about broader rollout.\n\nAll developers in the closed beta receive a $500 voucher to experiment before committing to production usage.\n\nPotential Benefits Over Fixed Tiers\n\nNo payment for unused capacity (unlike fixed tier pricing)\nAbility to scale up or down without tier changes\nGranular control over spending per feature\nMore transparent cost attribution\n\nX provides an interactive API cost calculator where you can input your expected usage patterns and see exactly what you’d pay.\n\nX Authentication: How to Prove Your Identity\n\nBefore making any API request, you need to authenticate – prove to X that you’re authorized to access specific data. The X API v2 supports multiple authentication methods, each suited for different scenarios.\n\n🔐 OAuth 2.0 Authorization Code (Recommended for New Development)\n\nOAuth 2.0 is the modern standard for authentication and is recommended for all new development. It’s more secure than legacy approaches and handles both public and private user data.\n\nWhen to Use OAuth 2.0\n\nBuilding new applications from scratch\nWeb applications and mobile apps requiring user login\nAccessing private user data (private lists, draft posts)\nPerforming actions on behalf of users (posting, liking, following)\n\nHow It Works\n\nUser clicks “Sign in with X” in your application\nYour app redirects them to X’s authorization page\nUser grants permissions (you define the scopes requested)\nX returns an authorization code\nYour app exchanges the code for an access token\nYou use this token for API requests on behalf of the user\n\nRequired credentials: Client ID, Client Secret, and redirect URI (configured in your developer app settings).\n\n🔑 OAuth 1.0a User Context (Legacy, Still Supported)\n\nThis older method is still supported but not recommended for new development. OAuth 1.0a authenticates on behalf of a specific user and is primarily useful for legacy applications.\n\nPosted tweets or direct messages on a user’s behalf\nRetrieving a specific user’s private timeline\nManaging user-specific resources\n\nWhy it’s less preferred: More complex to implement, less secure than OAuth 2.0, and X is gradually moving developers toward OAuth 2.0.\n\n👥 Bearer Token (App-Only, Best for Public Data)\n\nBearer token authentication is the simplest approach for accessing public data without user context. Use this when you’re building tools that only need public information.\n\nWhen to Use\n\nSearching for public posts\nRetrieving public user profiles\nAccessing publicly available trends\nBuilding analytics tools for public content\n\nHow it works: Provide your app’s credentials (API Key and Secret), receive a Bearer Token, include the token in API request headers. No user involvement required.\n\nSecurity Best Practice: Store all credentials (API Keys, Secrets, Bearer Tokens) in environment variables or secure configuration files – never hardcode them into your application code. If credentials are exposed, regenerate them immediately in the developer portal.\n\nX API v2: Endpoints and Resource Types\n\nThe X API comes in two versions: v1.1 (legacy, no longer updated) and v2 (current standard). All new projects should use v2, which provides access to endpoints organized by resource type – Posts, Users, Trends, Engagement, and more. Each resource supports specific operations (read, create, update, delete) depending on your tier and permissions.\n\nPosts (Tweets) – The Core Resource\n\nWhat you can do: Retrieve posts, search for posts matching criteria, create new posts, delete posts, access timelines\n\nCommon endpoints:\n\nGET /2/tweets — Lookup specific posts by ID\nGET /2/tweets/search/recent — Search recent posts (last 7 days)\nPOST /2/tweets — Create a new post\nGET /2/users/:id/tweets — Get posts from a specific user\n\nPosts are the foundation of the X API. Almost every use case involves retrieving, searching, or creating posts in some way.\n\nUsers – Profile Information\n\nWhat you can do: Access user profiles, get follower information, search for users\n\nCommon endpoints:\n\nGET /2/users/by/username/:username — Get user by handle\nGET /2/users/:id — Get user by ID\nGET /2/users/:id/followers — Get user’s followers\n\nUser endpoints let you build profiles, track followers, and verify account information without manually visiting X.\n\nEngagement – Likes, Retweets, Replies\n\nWhat you can do: See engagement metrics, track who liked or retweeted posts, manage user engagement\n\nCommon endpoints:\n\nGET /2/tweets/:id/liked_by — See who liked a post\nPOST /2/users/:id/likes — Like a post\nGET /2/tweets/:id/quote_tweets — Get quote tweets (retweets with added commentary)\n\nEngagement endpoints power analytics dashboards and community management tools by tracking interactions and responses to content.\n\nLists – User Collections\n\nWhat you can do: Create and manage curated lists of users, access posts from list members\n\nCommon endpoints:\n\nGET /2/lists — List your lists\nPOST /2/lists/:id/members — Add member to list\nGET /2/lists/:id/tweets — Get posts from list members\n\nLists are useful for organizing accounts and creating targeted feeds without following everyone publicly.\n\nTrends – What’s Happening Now\n\nWhat you can do: Access real-time trending topics and hashtags\n\nCommon endpoints:\n\nGET /2/trends — Get trending topics\nGET /2/users/personalized_trends — Get personalized trending topics for a user\n\nTrends data powers discovery features and helps applications surface relevant conversations happening right now on X.\n\nFiltered Stream – Real-Time Data\n\nWhat you can do: Subscribe to a real-time stream of posts matching your rules, receive notifications as posts are created\n\nCommon endpoints:\n\nGET /2/tweets/search/stream — Connect to filtered stream\nPOST /2/tweets/search/stream/rules — Create or modify stream rules\n\nFiltered stream is powerful for applications that need real-time updates (monitoring brand mentions, tracking specific keywords, etc.) without constantly polling the search endpoint.\n\nDirect Messages – Private Communication\n\nWhat you can do: Send and receive direct messages, manage conversations\n\nCommon endpoints:\n\nGET /2/dm_events — Retrieve direct messages\nPOST /2/dm_conversations/:id/messages — Send a message\n\nDirect message endpoints enable customer support automation and notification systems built on top of X.\n\nNote: Not all endpoints are available on all tiers. Free tier access is heavily restricted. The Basic tier ($200/month) provides access to most commonly used endpoints. Check the official X API documentation to verify endpoint availability for your tier before building features.\n\nRate Limits and Quota Management\n\nThe X API v2 enforces two types of limits: request rate limits (per 15-minute windows) and monthly post consumption limits (tracked across the calendar month).\n\n📨 Request Rate Limits (Per 15-Minute Windows)\n\nDifferent endpoints have different rate limits based on your tier.\n\nEndpoint Example\tFree Tier\tBasic Tier\tPro Tier\nGET /2/users/:id (lookup user)\t1 req / 24 hours\t100 requests / 24 hours\t900 requests / 15 mins\nPOST /2/tweets (create post)\tNot available\tAvailable\tAvailable\nGET /2/tweets/search/recent\tLimited\tAvailable\t450 requests / 15 mins\n\nFree tier uses per-endpoint limits measured in 24-hour windows (very restrictive). Basic and Pro tiers use 15-minute windows, which are much more generous because the window resets frequently.\n\n📊 Monthly Post Consumption Limits\n\nSeparate from request rate limits, search and stream endpoints consume from a monthly “post quota.” Once consumed, you can’t query these endpoints until the next calendar month.\n\nFree tier: 10,000 posts/month\nBasic tier: 500,000 posts/month\nPro tier: 2,000,000+ posts/month\n\nThese limits apply specifically to: recent search, filtered stream, user timelines, and mention timelines.\n\n🚨 What Happens When You Hit a Limit\n\nWhen you exceed a rate limit, X returns an HTTP 429 (Too Many Requests) error response with a Retry-After header indicating how many seconds to wait before retrying.\n\nWhen you exhaust your monthly post quota, X returns a 429 error indicating the quota limit is reached. You’re blocked from querying that endpoint until the next calendar month begins.\n\nBest Practice: Implement exponential backoff and retry logic in your application. When you receive a 429 error, wait the duration specified in Retry-After before retrying. For monthly quota exhaustion, cache your search results aggressively to avoid querying the same data repeatedly.\n\nFive Optimization Strategies: Reduce Costs and Improve Performance\n\nWith limited rate limits and monthly quotas, optimization directly impacts your application’s capability and cost. Here are proven strategies to reduce API consumption.\n\n1. Use Field Selection to Reduce Response Size\n\nBy default, API responses return many fields you might not need. The fields parameter lets you request only specific data.\n\nInstead of:\n\nGET /2/tweets?ids=TWEET_ID\n\nUse:\n\nGET /2/tweets?ids=TWEET_ID\u0026tweet.fields=created_at,public_metrics\u0026expansions=author_id\u0026user.fields=username\n\nThe second request returns only the data you need, resulting in smaller responses and faster processing.\n\n2. Implement Application-Level Caching\n\nCache API responses in your database or cache layer with appropriate TTL values:\n\nStatic content (usernames, display names): 24 hours\nSemi-dynamic content (post text, engagement counts): 6 hours\nReal-time content (trending topics): 30 minutes to 1 hour\n\nReal impact: A dashboard that previously fetched trending posts every 15 minutes can drop to every 2 hours with caching, reducing daily API calls from 96 to 12—an 87.5% reduction.\n\n3. Batch Requests Whenever Possible\n\nSome endpoints accept multiple IDs in a single request.\n\nInstead of 3 separate requests:\n\nGET /2/tweets?ids=ID1 GET /2/tweets?ids=ID2 GET /2/tweets?ids=ID3\n\nUse 1 batch request:\n\nGET /2/tweets?ids=ID1,ID2,ID3\n\nThis reduces your consumption from 3 requests to 1, saving 67% of your quota.\n\n4. Use Backoff and Retry Logic\n\nWhen hitting rate limits or temporary errors, retry with exponential backoff:\n\nWait 1 second before retry 1\nWait 2 seconds before retry 2\nWait 4 seconds before retry 3\nWait 8 seconds before retry 4\n\nThis prevents hammering the API and gives temporary issues time to resolve.\n\n5. Consider Filtered Stream Instead of Polling\n\nInstead of repeatedly asking “Are there new posts matching my criteria?” (polling), subscribe to webhooks where X pushes notifications when matching posts appear.\n\nPolling approach: Check every 5 minutes = 288 checks/day. Most checks return “no new data” (wasted quota).\n\nFiltered stream approach: Receive notification only when data changes. Zero wasted requests. Real-time updates.\n\nCombined Impact: Applying all five optimization strategies together can reduce your API consumption 70-90% compared to unoptimized code. A dashboard consuming 5,000 units daily can drop to 500-1,500 units through optimization alone, without requesting a quota increase.\n\nError Handling: Common Issues and Solutions\n\nUnderstanding common error codes helps you debug and recover gracefully.\n\nError Code\tHTTP Status\tCause\tSolution\nInvalid Request\t400\tMalformed request or missing required fields\tReview request format, ensure all required parameters present\nUnauthorized\t401\tMissing or invalid credentials\tCheck that Bearer Token or OAuth tokens are correct and not expired\nForbidden\t403\tAuthenticated but not authorized (insufficient permissions)\tRequest additional scopes in your OAuth flow, get user re-approval\nNot Found\t404\tResource doesn’t exist (invalid ID, deleted content)\tVerify resource ID is correct and still exists\nRate Limited\t429\tToo many requests within the time window\tImplement backoff, wait for rate limit window to reset (check Retry-After header)\nQuota Exceeded\t429\tMonthly post quota exhausted\tWait until next calendar month, or request quota increase\n\n🔧 Parsing Error Responses\n\nWhen an error occurs, X returns JSON with details:\n\n{ \"errors\": [ { \"message\": \"The `ids` query parameter value is invalid\", \"type\": \"https://api.x.com/2/problems/invalid-request\" } ] }\n\nBest practice: Always wrap API calls in try-catch blocks and log errors to a monitoring system. This helps you identify patterns and debug issues faster.\n\nGet Your X API Key: Step-by-Step\n\nThe process has simplified significantly compared to the old Twitter API, but there are still critical steps:\n\n🔗 Step 1: Create a Developer Account\n\nNavigate to X Developer Portal\nSign in with your X account (or create one)\nComplete developer profile setup\nAwait approval (typically 5-10 minutes)\n\nFirst-time users will see an onboarding wizard that guides you through creating your first Project and App. If you don’t see this, click “Projects \u0026 Apps” in the left sidebar.\n\n📂 Step 2: Create a Project\n\nA Project is a container for one or more Apps. Think of it as a workspace.\n\nIn the Developer Portal, click “Create Project”\nName your project (e.g., “Analytics Dashboard”)\nDescribe your use case\nSelect your access tier (start with Free for testing)\n\nBy default, you’re on the Free tier. To upgrade: Go to the “Products” section in the developer portal → Find the X API v2 card and click “View Access Levels” → Select the tier you want\n\n🔨 Step 3: Create an App\n\nWithin your project, click “Create App”\nChoose an App name (e.g., “Brand Monitor Bot”)\nAccept terms\nGenerate your API keys\n\n🔑 Step 4: Access Your Credentials\n\nNavigate to your app’s “Keys and Tokens” tab. You’ll find:\n\nAPI Key (Consumer Key): A public identifier for your app. Safe to share in source code.\nAPI Secret Key (Consumer Secret): Keep this secure! Never expose it in client-side code or version control.\nBearer Token (for app-only auth): Used for app-only authentication (read-only, no user context needed). Also keep secure.\nClient ID \u0026 Secret (for OAuth 2.0): OAuth 2.0 credentials. Only visible if you enable OAuth 2.0 in your app settings.\n\nCritical Security Warning: These credentials display only once. Copy them immediately to a secure location (password manager, encrypted file, environment variables). Never commit to version control or publish publicly. If exposed, regenerate immediately.\n\nRecommended Tools \u0026 Resources\n\nOfficial X API Documentation: The authoritative source for all endpoints, parameters, and examples.\nRate Limits Reference: Complete breakdown of all endpoint rate limits by tier.\nX Postman Collection: Pre-built API requests for testing in Postman. Eliminates manual endpoint crafting.\nX Developer Community Forum: Connect with other developers, ask questions, report issues.\nX Dev GitHub: Official sample code, SDKs, and libraries for Python, JavaScript, Java, and more.\nClient Libraries: Official and community-maintained SDKs in multiple languages. Saves time vs. raw HTTP requests.\n\nFAQ: Common Questions About the X API\n\nThe Free tier is available but extremely limited (500 posts/month, 1 request per 24 hours on most endpoints). It’s suitable only for development and proof-of-concept work. For production applications, the Basic tier ($200/month) is the practical minimum.\n\nOAuth 2.0 authenticates on behalf of a specific user and grants permission scopes. Bearer token (app-only) authenticates as your application to access public data. Use OAuth 2.0 when users need to login and grant permissions; use Bearer tokens for public data without user involvement.\n\nOAuth tokens don’t expire automatically—they remain valid until explicitly revoked or regenerated. Best practice: regenerate tokens every 90 days for security. If you suspect a token is compromised, regenerate immediately.\n\nYou receive an HTTP 429 response with a Retry-After header. Implement exponential backoff and retry after the specified duration. Your request is rejected, so no quota is consumed for failed attempts.\n\nYes. Submit a quota increase request through the Google Cloud Console. Provide your use case, user count, and realistic usage estimates. Google reviews and approves/denies based on compliance and legitimacy. Quota increases are free.\n\nFree tier: development and testing only. Basic ($200/month): most real-world projects (content monitoring, automation, small applications). Pro ($5,000/month): high-traffic applications, APIs serving many end users. Enterprise ($42k+): mission-critical systems requiring SLAs and dedicated support.\n\nNeed more help? Check the X Developer Documentation or visit the X Developer Community Forum to connect with other developers and get answers from the community.\n\nNext Steps\n\nBuilding with the X API is straightforward once you understand the pricing, rate limits, and optimization strategies. Whether you’re monitoring brand conversations, automating content, or analyzing trends, the API provides everything you need. Start with a small project, implement the five optimization strategies early, and grow from there.\n\nThe difference between a scalable application and one that struggles often comes down to implementation details. Plan thoroughly, optimize aggressively from day one, and your X integration will thrive. Ready to get started? Head to developer.x.com, create your first project, and begin building!\n\nSupport\n\nIf you have read the instructions but still have any questions, you can always contact our support specialists or read articles in the Help Center.\n\nAsk for help\n\nForum\n\nContact Elfsight peers, share your thoughts, and participate in community activities!\n\nJoin us\n\nWishlist\n\nVisit Wishlist to offer features that you need but the Form Builder doesn’t have yet.\n\nShare Your Idea\n\nHi, I’m Kristina – content manager at Elfsight. My articles cover practical insights and how-to guides on smart widgets that tackle real website challenges, helping you build a stronger online presence.",
    "link": "https://elfsight.com/blog/how-to-get-x-twitter-api-key-in-2026/",
    "snippet": "X provides an interactive API cost calculator where you can input your expected usage patterns and see exactly what you'd pay. X ...",
    "title": "How to Get X API Key: Complete 2026 Guide to Pricing ... - Elfsight"
  },
  {
    "content_readable": "Crawler is not allowed!",
    "link": "https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476",
    "snippet": "We're thrilled to officially announce the launch of our new X API Pay-Per-Use pricing model! This update is designed to empower the heart of our ...",
    "title": "Announcing the Launch of X API Pay-Per-Use Pricing"
  },
  {
    "content_readable": "Your premier source for technology news, insights, and analysis. Covering the latest in AI, startups, cybersecurity, and innovation.\n\nHAVE A TIP?\n\nSend us a tip using our anonymous form.\n\nHAVE QUESTIONS?\n\nReach out to us on any subject.\n\n© 2026 The Tech Buzz. All rights reserved.",
    "link": "https://www.techbuzz.ai/articles/x-tests-pay-per-use-api-model-to-win-back-developers",
    "snippet": "X's new API calculator lets developers estimate costs upfront, a transparency move that stands in stark contrast to the all-or-nothing tiers ...",
    "title": "X Tests Pay-Per-Use API Model to Win Back Developers"
  },
  {
    "content_readable": "Price Per Token\n\nIs ChatGPT Plus or Claude Pro worth it? Input your usage and instantly see whether a subscription or pay-per-token API access is cheaper for you.\n\nMessages per Day\n\nAPI Model\n\nMessage Length\n\nLonger prompt, thorough response\n\n800 input + 1500 output tokens/msg\n\nAt your usage, the API saves you $9.80/month\n\nYour estimated API cost is $10.20/mo compared to ChatGPT Plus at $20.00/mo. The API gives you full flexibility with no rate limits.\n\nNote: Subscriptions include features not available via API (web browsing, file uploads, custom GPTs, etc.)\n\nAPI\n\nBest Value\n\n$10.20/mo\n\nGPT-4o via API\n\nPay per token used\n\nNo rate limits\n\nFull API access\n\nChatGPT Free\n\nFree\n\nSaves $10.20/mo vs API\n\nMay exceed rate limits\n\nLimited GPT-4o access\n\nGPT-4o mini\nLimited GPT-4o\nWeb browsing\nLimited file uploads\n\nChatGPT Plus\n\n$20/mo\n\n$9.80/mo more than API\n\nGPT-4o\nGPT-4o mini\no1\no3-mini\n\nChatGPT Pro\n\n$200/mo\n\n$189.80/mo more than API\n\nUnlimited GPT-4o\nUnlimited o1\no1 pro mode\nUnlimited Advanced Data Analysis\n\nAt your usage, Claude Free saves you $14.94/month\n\nClaude Free costs $0.0000/mo vs $14.94/mo for API.\n\nNote: Subscriptions include features not available via API (web browsing, file uploads, custom GPTs, etc.)\n\nAPI\n\n$14.94/mo\n\nClaude Sonnet 4.5 via API\n\nPay per token used\n\nNo rate limits\n\nFull API access\n\nClaude Free\n\nBest Value\n\nFree\n\nSaves $14.94/mo vs API\n\nClaude Sonnet 4.5\nBasic web search\nLimited file uploads\n\nClaude Pro\n\n$20/mo\n\n$5.06/mo more than API\n\nClaude Sonnet 4.5\nClaude Opus 4\nExtended thinking\nProjects\n\nClaude Max 5x\n\n$100/mo\n\n$85.06/mo more than API\n\nEverything in Pro\n5x Pro usage limits\nHigher rate limits on all models\n\nClaude Max 20x\n\n$200/mo\n\n$185.06/mo more than API\n\nEverything in Pro\n20x Pro usage limits\nHighest rate limits\n\nFrequently Asked Questions\n\nCommon questions about subscription vs API pricing\n\nFollow us:\n\n2026 68 Ventures, LLC. All rights reserved.",
    "link": "https://pricepertoken.com/subscription-calculator",
    "snippet": "Your estimated API cost is $10.20/mo compared to ChatGPT Plus at $20.00/mo. The API gives you full flexibility with no rate limits. Note: Subscriptions include ...",
    "title": "Subscription vs API Cost Calculator - ChatGPT Plus \u0026 Claude Pro vs ..."
  },
  {
    "content_readable": "The Twitter API pricing saga has been a wild ride of extremes, and it looks like we might finally be heading toward some middle ground. According to recent announcements, Twitter (now X) is testing a pay-per-usage model that could dramatically reshape how developers and data scrapers interact with the platform.\n\nThe Pendulum Swings Back\n\nTwitter's API pricing history reads like a case study in how not to manage developer relations. The platform started with a completely free API that, while generous, created massive problems with abuse, scraping, and system strain. When Elon Musk took over, the pendulum swung hard in the opposite direction – suddenly, API access became prohibitively expensive for most developers and small businesses.\n\nThe result? A thriving underground economy of scrapers and unofficial API alternatives, along with frustrated developers who were priced out of legitimate access to Twitter data.\n\nPay-Per-Use: The Obvious Solution\n\nThe announcement hints at what many in the developer community have been calling for: a reasonable, pay-as-you-go pricing model. This approach makes intuitive sense for several reasons:\n\nScalability for Everyone: Small developers and researchers can access the API without massive upfront commitments, while larger enterprises pay proportionally for their usage.\nBetter Cost Control: Instead of paying for unused quota or being locked into expensive tiers, users pay only for what they actually consume.\nReduced Scraping Incentive: If official API access becomes affordable, the economic motivation to build and maintain scraping infrastructure diminishes significantly.\n\nThe Scraper's Dilemma\n\nFor those currently running Twitter scraping operations, this development presents an interesting calculation. Scraping Twitter has always been a cat-and-mouse game.\n\nYou're constantly dealing with rate limits, IP blocks, CAPTCHA systems, and constantly changing HTML structures. It's expensive to maintain and inherently unreliable.\n\nIf Twitter prices their pay-per-use API competitively, many scrapers might find it cheaper and more reliable to simply pay for official access. The question becomes: what constitutes \"competitively priced\"?\n\nWhat \"Reasonable\" Might Look Like\n\nFor a pay-per-use model to truly disrupt the scraping economy, it needs to be:\n\nTransparent: Clear pricing with no hidden fees or surprise charges\nGranular: Pay for exactly what you use, whether that's 100 requests or 100,000\nCompetitive: Priced low enough that it's cheaper than building and maintaining scraping infrastructure\nReliable: Stable pricing and terms that developers can build long-term plans around\n\nThe Bigger Picture\n\nThis shift could signal a broader maturation in how social media platforms think about data access. The all-or-nothing approaches of the past – either completely free or prohibitively expensive – haven't served anyone well.\n\nA well-implemented pay-per-use model could:\n\nReduce the technical arms race between platforms and scrapers\nEnable more legitimate research and business applications\nProvide platforms with sustainable revenue from data access\nCreate a healthier ecosystem for developers\n\nImpact on the Scraping Ecosystem\n\nIf Twitter gets this right, it could set a precedent for other social media platforms. The current ecosystem of scraping tools and services exists largely because official APIs are either unavailable, unreliable, or unaffordably priced.\n\nA shift toward reasonable pay-per-use pricing across major platforms could fundamentally change this landscape, potentially making legitimate API access the norm rather than the exception.\n\nLooking Forward\n\nThe scraping community is watching this development closely. Many scraper operators would probably prefer the predictability and reliability of official API access – if the price is right.\n\nFor now, it's a waiting game. The pilot program will provide the first real indication of whether Twitter has learned from their pricing missteps or if we're headed for another swing of the pendulum.",
    "link": "https://scrapecreators.com/blog/twitter-s-pay-per-use-api-could-this-finally-kill-the-scraping-economy",
    "snippet": "Better Cost Control: Instead of paying for unused quota or being locked into expensive tiers, users pay only for what they actually consume.",
    "title": "Twitter's Pay-Per-Use API: Could This Finally Kill the Scraping ..."
  },
  {
    "content_readable": "This is part one of the Advanced Use Cases series:\n\n1️⃣ Extract Metadata from Queries to Improve Retrieval\n\n2️⃣ Query Expansion\n\n3️⃣ Query Decomposition\n\n4️⃣ Automated Metadata Enrichment\n\nSometimes a single question is multiple questions in disguise. For example: “Did Microsoft or Google make more money last year?”. To get to the correct answer for this seemingly simple question, we actually have to break it down: “How much money did Google make last year?” and “How much money did Microsoft make last year?”. Only if we know the answer to these 2 questions can we reason about the final answer.\n\nThis is where query decomposition comes in. This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach:\n\nDecompose the original question into smaller questions that can be answered independently to each other. Let’s call these ‘sub questions’ here on out.\nReason about the final answer to the original question, based on each sub-answer.\n\nWhile for many query/dataset combinations, this may not be required, for some, it very well may be. At the end of the day, often one query results in one retrieval step. If within that one single retrieval step we are unable to have the retriever return both the money Microsoft made last year and Google, then the system will struggle to produce an accurate final response.\n\nThis method ensures that we are:\n\nretrieving the relevant context for each sub question.\nreasoning about the final answer given each answer based on the contexts retrieved for each sub question.\n\nIn this article, I’ll be going through some key steps that allow you to achieve this. You can find the full working example and code in the linked recipe from our cookbook. Here, I’ll only show the most relevant parts of the code.\n\n🚀 I’m sneaking something extra into this article. I saw the opportunity to try out the structured output functionality (currently in beta) by OpenAI to create this example. For this step, I extended the OpenAIGenerator in Haystack to be able to work with Pydantic schemas. More on this in the next step.\n\nLet’s try build a full pipeline that makes use of query decomposition and reasoning. We’ll use a dataset about Game of Thrones (a classic for Haystack) which you can find preprocessed and chunked on Tuana/game-of-thrones on Hugging Face Datasets.\n\nDefining our Questions Structure\n\nOur first step is to create a structure within which we can contain the subquestions, and each of their answers. This will be used by our OpenAIGenerator to produce a structured output.\n\nfrom pydantic import BaseModel\n\nclass Question(BaseModel):\n    question: str\n    answer: Optional[str] = None\n\nclass Questions(BaseModel):\n    questions: list[Question]\n\n\nThe structure is simple, we have Questions made up of a list of Question. Each Question has the question string as well as an optional answer to that question.\n\nDefining the Prompt for Query Decomposition\n\nNext up, we need to get an LLM to decompose a question and produce multiple questions. Here, we will start making use of our Questions schema.\n\nsplitter_prompt = \"\"\"\nYou are a helpful assistant that prepares queries that will be sent to a search component.\nSometimes, these queries are very complex.\nYour job is to simplify complex queries into multiple queries that can be answered\nin isolation to eachother.\n\nIf the query is simple, then keep it as it is.\nExamples\n1. Query: Did Microsoft or Google make more money last year?\n   Decomposed Questions: [Question(question='How much profit did Microsoft make last year?', answer=None), Question(question='How much profit did Google make last year?', answer=None)]\n2. Query: What is the capital of France?\n   Decomposed Questions: [Question(question='What is the capital of France?', answer=None)]\n3. Query: {{question}}\n   Decomposed Questions:\n\"\"\"\n\nbuilder = PromptBuilder(splitter_prompt)\nllm = OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions})\n\n\nAnswering Each Sub Question\n\nFirst, let’s build a pipeline that uses the splitter_prompt to decompose our question:\n\nquery_decomposition_pipeline = Pipeline()\n\nquery_decomposition_pipeline.add_component(\"prompt\", PromptBuilder(splitter_prompt))\nquery_decomposition_pipeline.add_component(\"llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\n\nquery_decomposition_pipeline.connect(\"prompt\", \"llm\")\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question}})\nprint(result[\"llm\"][\"structured_reply\"])\n\n\nThis produces the following Questions (List[Question])\n\nquestions=[Question(question='How many siblings does Jamie have?', answer=None), \n           Question(question='How many siblings does Sansa have?', answer=None)]\n\n\nNow, we have to fill in the answer fields. For this step, we need to have a separate prompt and two custom components:\n\nThe CohereMultiTextEmbedder which can take multiple questions rather than a single one like the CohereTextEmbedder.\nThe MultiQueryInMemoryEmbeddingRetriever which can again, take multiple questions and their embeddings, returning question_context_pairs. Each pair contains the question and documents that are relevant to that question.\n\nNext, we need to construct a prompt that can instruct a model to answer each subquestion:\n\nmulti_query_template = \"\"\"\nYou are a helpful assistant that can answer complex queries.\nHere is the original question you were asked: {{question}}\n\nAnd you have split the task into the following questions:\n{% for pair in question_context_pairs %}\n  {{pair.question}}\n{% endfor %}\n\nHere are the question and context pairs for each question.\nFor each question, generate the question answer pair as a structured output\n{% for pair in question_context_pairs %}\n  Question: {{pair.question}}\n  Context: {{pair.documents}}\n{% endfor %}\nAnswers:\n\"\"\"\n\nmulti_query_prompt = PromptBuilder(multi_query_template)\n\n\nLet’s build a pipeline that can answer each individual sub question. We will call this the query_decomposition_pipeline :\n\nquery_decomposition_pipeline = Pipeline()\n\nquery_decomposition_pipeline.add_component(\"prompt\", PromptBuilder(splitter_prompt))\nquery_decomposition_pipeline.add_component(\"llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\nquery_decomposition_pipeline.add_component(\"embedder\", CohereMultiTextEmbedder(model=\"embed-multilingual-v3.0\"))\nquery_decomposition_pipeline.add_component(\"multi_query_retriever\", MultiQueryInMemoryEmbeddingRetriever(InMemoryEmbeddingRetriever(document_store=document_store)))\nquery_decomposition_pipeline.add_component(\"multi_query_prompt\", PromptBuilder(multi_query_template))\nquery_decomposition_pipeline.add_component(\"query_resolver_llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\n\nquery_decomposition_pipeline.connect(\"prompt\", \"llm\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"embedder.questions\")\nquery_decomposition_pipeline.connect(\"embedder.embeddings\", \"multi_query_retriever.query_embeddings\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"multi_query_retriever.queries\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"embedder.questions\")\nquery_decomposition_pipeline.connect(\"multi_query_retriever.question_context_pairs\", \"multi_query_prompt.question_context_pairs\")\nquery_decomposition_pipeline.connect(\"multi_query_prompt\", \"query_resolver_llm\")\n\n\nRunning this pipeline with the original question “Who has more siblings, Jamie or Sansa?”, results in the following structured output:\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question},\n                                           \"multi_query_prompt\": {\"question\": question}})\n\nprint(result[\"query_resolver_llm\"][\"structured_reply\"])\n\n\nquestions=[Question(question='How many siblings does Jamie have?', answer='2 (Cersei Lannister, Tyrion Lannister)'),\n           Question(question='How many siblings does Sansa have?', answer='5 (Robb Stark, Arya Stark, Bran Stark, Rickon Stark, Jon Snow)')]\n\n\nReasoning About the Final Answer\n\nThe final step we have to take is to reason about the ultimate answer to the original question. Again, we create a prompt that will instruct an LLM to do this. Given we have the questions output that contains each sub question and answer, we will make these inputs to this final prompt.\n\nreasoning_template = \"\"\"\nYou are a helpful assistant that can answer complex queries.\nHere is the original question you were asked: {{question}}\n\nYou have split this question up into simpler questions that can be answered in\nisolation.\nHere are the questions and answers that you've generated\n{% for pair in question_answer_pair %}\n  {{pair}}\n{% endfor %}\n\nReason about the final answer to the original query based on these questions and\naswers\nFinal Answer:\n\"\"\"\n\nresoning_prompt = PromptBuilder(reasoning_template)\n\n\nTo be able to augment this prompt with the question answer pairs, we will have to extend our previous pipeline and connect the structured_reply from the previous LLM, to the question_answer_pair input of this prompt.\n\nquery_decomposition_pipeline.add_component(\"reasoning_prompt\", PromptBuilder(reasoning_template))\nquery_decomposition_pipeline.add_component(\"reasoning_llm\", OpenAIGenerator(model=\"gpt-4o-mini\"))\n\nquery_decomposition_pipeline.connect(\"query_resolver_llm.structured_reply\", \"reasoning_prompt.question_answer_pair\")\nquery_decomposition_pipeline.connect(\"reasoning_prompt\", \"reasoning_llm\")\n\n\nNow, let’s run this final pipeline and see what results we get:\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question},\n                                           \"multi_query_prompt\": {\"question\": question},\n                                           \"reasoning_prompt\": {\"question\": question}},\n                                           include_outputs_from=[\"query_resolver_llm\"])\n\nprint(\"The original query was split and resolved:\\n\")\n\nfor pair in result[\"query_resolver_llm\"][\"structured_reply\"].questions:\n  print(pair)\nprint(\"\\nSo the original query is answered as follows:\\n\")\nprint(result[\"reasoning_llm\"][\"replies\"][0])\n\n\n🥁 Drum roll please:\n\nThe original query was split and resolved:\n\nquestion='How many siblings does Jaime have?' answer='Jaime has one sister (Cersei) and one younger brother (Tyrion), making a total of 2 siblings.'\nquestion='How many siblings does Sansa have?' answer='Sansa has five siblings: one older brother (Robb), one younger sister (Arya), and two younger brothers (Bran and Rickon), as well as one older illegitimate half-brother (Jon Snow).'\n\nSo the original query is answered as follows:\n\nTo determine who has more siblings between Jaime and Sansa, we need to compare the number of siblings each has based on the provided answers.\n\nFrom the answers:\n- Jaime has 2 siblings (Cersei and Tyrion).\n- Sansa has 5 siblings (Robb, Arya, Bran, Rickon, and Jon Snow).\n\nSince Sansa has 5 siblings and Jaime has 2 siblings, we can conclude that Sansa has more siblings than Jaime.\n\nFinal Answer: Sansa has more siblings than Jaime.\n\n\nWrapping up\n\nGiven the right instructions, LLMs are good at breaking down tasks. Query decomposition is a great way we can make sure we do that for questions that are multiple questions in disguise.\n\nIn this article, you learned how to implement this technique with a twist 🙂 Let us know what you think about using structured outputs for these sorts of use cases. And check out the Haystack experimental repo to see what new features we’re working on.",
    "link": "https://haystack.deepset.ai/blog/query-decomposition",
    "snippet": "This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach.",
    "title": "Advanced RAG: Query Decomposition \u0026 Reasoning - Haystack"
  },
  {
    "content_readable": "This page provides a list of attributes and keywords that are used in YouTrack query language. You'll also find a complete list of operators, symbols, and relative date parameters that are recognized in search queries.\n\nSeveral references on this page are not available in Simple Search. Switch to Advanced Search to access them.\n\nIssue Attributes\n\nEvery issue has base attributes that are set automatically by YouTrack. These include the issue ID, the user who created or applied the last update to the issue, and so on.\n\nThese search attributes represent an \u003cAttribute\u003e in the Search Query Grammar. Their values correspond to the \u003cValue\u003e or \u003cValueRange\u003e parameter.\n\nAttribute-based search uses the syntax attribute: value.\n\nYou can specify multiple values for the target attribute, separated by commas.\n\nExclude specific values from the search results with the syntax attribute: -value.\n\nIn many cases, you can omit the attribute and reference values directly with the # or - symbols. For additional guidelines, see Advanced Search.\n\nattachment text\n\nattachment text: \u003ctext\u003e\n\nReturns issues that include image attachments with the specified text.\n\nattachments\n\nattachments: \u003ctext\u003e\n\nReturns issues that include attachments with the specified filename.\n\nBoard\n\nBoard \u003cboard name\u003e: \u003csprint name\u003e\n\nReturns issues that are assigned to the specified sprint on the specified agile board. To find issues that are assigned to agile boards with sprints disabled, use has: \u003cboard name\u003e.\n\ncc recipients\n\ncc recipients: \u003cuser\u003e\n\nReturns tickets where the specified users are added as CCs.\n\ncode\n\ncode: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words that are formatted as code in the issue description or comments. This includes matches that are formatted as inline code spans, indented and fenced code blocks, and stack traces.\n\ncommented: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues to which comments were added on the specified date or within the specified period.\n\ncommenter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were commented by the specified user or by a member of the specified group.\n\ncomments: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in a comment.\n\ncreated\n\ncreated: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that were created on a specific date or within a specified time frame.\n\ndescription\n\ndescription: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in the issue description.\n\ndocument type\n\ndocument type: Issue | Ticket\n\nReturns either issue or ticket type documents.\n\nGantt\n\nGantt: \u003cchart name\u003e\n\nReturns issues that are assigned to the specified Gantt chart.\n\nhas\n\nhas: \u003cattribute\u003e\n\nThe has keyword functions as a Boolean search term. When used in a search query, it returns all issues that contain a value for the specified attribute. Use the minus operator (-) before the specified attribute to find issues that have empty values.\n\nFor example, to find all issues in the TST project that are assigned to the current user, have a duplicates link, have attachments, but do not have any comments, enter in: TST for: me has: duplicates , attachments , -comments.\n\nYou can use the has keyword in combination with the following attributes:\n\nAttribute\n\nDescription\n\nattachments\n\nReturns issues that have attachments.\n\nboards\n\nReturns issues that are assigned to at least one agile board. When used with an exclusion operator (-), returns issues that aren't assigned to any boards.\n\nBoard \u003cboard name\u003e\n\nReturns issues that are assigned to the specified agile board.\n\ncomments\n\nReturns issues that have one or more comments.\n\ndescription\n\nReturns issues that do not have an empty description.\n\n\u003cfield name\u003e\n\nReturns issues that contain any value in the specified custom field. Enclose field names that contain spaces in braces.\n\nGantt\n\nReturns issues that are assigned to any Gantt chart.\n\n\u003clink type name\u003e\n\nReturns issues that have links that match the specified outward name or inward name. Enclose link names that contain spaces in braces.\n\nFor example, to find issues that are linked as subtasks to parent issues, use:\n\nhas: {Subtask of}\n\nTo find issues that aren't linked to a parent issue, use:\n\nhas: -{Subtask of}\n\nlinks\n\nReturns issues that have any issue link type.\n\nstar\n\nReturns issues that have the star tag for the current user.\n\nunderestimation\n\nReturns issues where the total spent time is greater than the original estimation value.\n\nvcs changes\n\nReturns issues that contain vcs changes.\n\nvotes\n\nReturns issues that have one or more votes.\n\nwork\n\nReturns issues that have one or more work items.\n\nissue ID\n\nissue ID: \u003cissue ID\u003e, #\u003cissue ID\u003e\n\nReturns an issue that matches the specified issue ID. This attribute can also be referenced as a single value with the syntax #\u003cissue ID\u003e or -\u003cissue ID\u003e. When the search returns a single issue, the result is displayed in single issue view.\n\nIf you don't use the syntax for an attribute-based search (issue ID: \u003cvalue\u003e or #\u003cvalue\u003e), the input is also parsed as a text search. In addition to any issue that matches the specified issue ID, the search results include any issue that contains the specified ID in any text attribute.\n\nIf you set the issue ID in quotes, the input is only parsed as a text search. The search results only include issues that contain the specified ID in a text attribute.\n\nNote that even when an issue ID is parsed as a text search, the results do not include issue links. To find issues based on issue links, use the links attribute or reference a specific link type.\n\nlinks\n\nlinks: \u003cissue ID\u003e\n\nReturns all issues that contain links to the specified issue.\n\nlooks like\n\nlooks like: \u003cissue ID\u003e\n\nReturns issues in which the issue summary or description contains words that are found in the issue summary or description in the specified issue. Issues that contain matching words in the issue summary are given higher weight when the search results are sorted by relevance.\n\nmentioned in\n\nmentioned in: \u003cissue id\u003e\n\nReturns issues with issue IDs referenced in the description or a comment of the target issue. Issue IDs in supplemental text fields aren't included in the search results.\n\nmentions\n\nmentions: \u003cissue id\u003e, \u003cuser\u003e\n\nReturns issues that contain either @mention for the specified user or issue IDs referenced in the description or a comment. User mentions and issue IDs in supplemental text fields aren't included in the search results.\n\norganization\n\norganization: \u003corganization name\u003e\n\nReturns issues that belong to the specified organization. This attribute can also be referenced as a single value.\n\nproject\n\nproject: \u003cproject name\u003e | \u003cproject ID\u003e\n\nReturns issues that belong to the specified project. This attribute can also be referenced as a single value.\n\nreporter\n\nreporter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues and tickets that were created by the specified user or a member of the specified group, including tickets created on behalf of the specified user. Use me to return issues that were created by the current user.\n\nresolved date\n\nresolved date: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that were resolved on a specific date or within a specified time frame.\n\nsaved search\n\nsaved search: \u003csaved search name\u003e\n\nReturns issues that match the search criteria of a saved search. This attribute can also be referenced as a single value with the syntax #\u003csaved search name\u003e or -\u003csaved search name\u003e.\n\nsubmitter\n\nsubmitter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were submitted by the specified user or a member of the specified group on behalf of another user. Use me to return issues that were submitted by the current user.\n\nsummary\n\nsummary: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in the issue summary.\n\ntag\n\ntag: \u003ctag name\u003e\n\nReturns issues that match a specified tag. This attribute can also be referenced as a single value with the syntax #\u003ctag name\u003e or -\u003ctag name\u003e\n\nupdated\n\nupdated: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues where the most recent change occurred on a specific date or within a specified time frame.\n\nupdater\n\nupdater: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were last updated by the specified user or a member of the specified group. Use me to return issues to which you applied the last update.\n\nvcs changes\n\nvcs changes: \u003ccommit hash\u003e\n\nReturns issues that contain vcs changes that were applied in the commit object that is identified by the specified SHA-1 commit hash.\n\nvisible to\n\nvisible to: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that are visible to the specified user or a member of the specified group.\n\nvoter\n\nvoter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that have votes from the specified user or a member of the specified group.\n\nCustom Fields\n\nYou can find issues that are assigned specific values in a custom field. As with other issue attributes, you use the syntax attribute: value or attribute: -value. In this case, the attribute is the name of the custom field. In most cases, you can reference values directly with the # or - symbols.\n\nFor custom fields that are assigned an empty value, you can reference this property as a value. For example, to search for issues that are not assigned to a specific user, enter Assignee: Unassigned or #Unassigned. If the field is not assigned an empty value, find issues that do not store a value in the field with the syntax \u003cfield name\u003e: {No \u003cfield name\u003e} or has: -\u003cfield name\u003e.\n\nThis section lists the search attributes for default custom fields. Note that default fields and their values can be customized. The actual field names, values, and aliases may vary.\n\nAffected versions\n\nAffected versions: \u003cvalue\u003e\n\nReturns issues that were detected in a specific version of the product.\n\nAssignee\n\nAssignee: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns all issues that are assigned to the specified user or a member of the specified group.\n\nFix versions\n\nFix versions: \u003cvalue\u003e\n\nReturns issues that were fixed in a specific version of the product.\n\nFixed in build\n\nFixed in build: \u003cvalue\u003e\n\nReturns issues that were fixed in the specified build.\n\nPriority\n\nPriority: \u003cvalue\u003e\n\nReturns issues that match the specified priority level.\n\nState\n\nState: \u003cvalue\u003e | Resolved | Unresolved\n\nReturns issues that match the specified state.\n\nThe Resolved and Unresolved states cannot be assigned to an issue directly, as they are properties of specific values that are stored in the State field.\n\nBy default, Fixed, Won't fix, Duplicate, Incomplete, Obsolete, and Can't reproduce states are set as Resolved.\n\nThe Submitted, Open, In Progress, Reopened, and To be discussed states are set as Unresolved.\n\nSubsystem\n\nSubsystem: \u003cvalue\u003e\n\nReturns issues that are assigned to a specific subsystem within a project.\n\nType\n\nType: \u003cvalue\u003e\n\nReturns issues that match the specified issue type.\n\nIssue Links\n\nYou can search for issues based on the links that connect them to other issues. Search queries that reference a specific issue link type can be interpreted in two different ways:\n\nWhen specified as \u003clink type\u003e: \u003cissue ID\u003e, the query returns issues linked to the specified issue using this link type.\n\nUsing \u003clink type\u003e: (\u003csub-query\u003e), the query returns issues linked to any issue that matches the specified sub-query using this link type.\n\nWhen searching for linked issues, you can enter the outward name or inward name of any issue link type, then specify your search criteria.\n\nThis list contains search parameters for issue link types that are provided by default in YouTrack. The default issue link types can be customized, which means that the actual names may vary. You can also use this syntax to build search queries that refer to custom link types.\n\nlinks\n\nlinks: \u003cissue ID\u003e\n\nReturns issues that are linked to a target issue.\n\naggregate\n\naggregate \u003caggregation link type\u003e: \u003cissue ID\u003e\n\nReturns issues that are indirectly linked to a target issue. Use this search term to find, for example, issues that are parent issues for a parent issue or subtasks of issues that are also subtasks of a target issue. The results include any issue that is linked to the target issue using the specified link type, whether directly or indirectly.\n\nThis search argument is only compatible with aggregation link types.\n\nDepends on\n\nDepends on: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have depends on links to a target issue or any issue that matches the specified sub-query.\n\nDuplicates\n\nDuplicates: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have duplicates links to a target issue or any issue that matches the specified sub-query.\n\nIs duplicated by\n\nIs duplicated by: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have is duplicated by links to a target issue or any issue that matches the specified sub-query.\n\nIs required for\n\nIs required for: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have is required for links to a target issue or any issue that matches the specified sub-query.\n\nParent for\n\nParent for: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have parent for links to a target issue or any issue that matches the specified sub-query.\n\nRelates to\n\nRelates to: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have relates to links to a target issue or any issue that matches the specified sub-query.\n\nSubtask of\n\nSubtask of: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have subtask of links to a target issue or any issue that matches the specified sub-query.\n\nTime Tracking\n\nThere is a dedicated set of search attributes that you can use to find issues that contain time tracking data. These attributes look for specific values that have been added as work items to an issue.\n\nwork\n\nwork: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or phrase in a work item.\n\nwork author: \u003cuser\u003e\n\nReturns issues that have work items that were added by the specified user.\n\nwork type\n\nwork type: \u003cvalue\u003e\n\nReturns issues that have work items that are assigned the specified work type. The query work type: {No type} returns issues that have work items that are not assigned a work item type.\n\nwork date\n\nwork date: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that have work items that are recorded for the specified date or within the specified time frame.\n\ncustom work item attributes\n\nwork \u003cattribute name\u003e: \u003cattribute value\u003e\n\nReturns issues that have work items that are assigned the specified value for a specific work item attribute.\n\nSort Attributes\n\nYou can specify the sort order for the list of issues that are returned by the search query.\n\nYou can sort issues by any of the attributes on the following list. In the Search Query Grammar, these attributes represent the \u003cSortAttribute\u003e value.\n\nsort by\n\nsort by: \u003cvalue\u003e \u003csort order\u003e\n\nSorts issues that are returned by the query in the specified order.\n\nWhen you perform a text search, the results can be sorted by relevance. You cannot specify relevance as a sort attribute. For more information, see Sorting by Relevance.\n\nKeywords\n\nThere are a number of values that can be substituted with a keyword. When you use a keyword in a search query, you do not specify an attribute. A keyword is preceded by the number sign (#) or the minus operator. In the YouTrack Search Query Grammar, these keywords correspond to a \u003cSingleValue\u003e.\n\nme\n\nReferences the current user. This keyword can be used as a value for any attribute that accepts a user.\n\nWhen used as a single value (#me) the search returns issues that are assigned to, reported by, or commented by the current user.\n\nFor example, to find unresolved issues that are assigned to, reported by, or contain comments from the current user, enter #me -Resolved.\n\nThe results also include issues that contain references to the current user in any custom field that stores values as users. For example, you have a custom field Reviewed by that stores a user type. The search query #me -Resolved also includes issues that reference the current user in this custom field.\n\nmy\n\nAn alias for me.\n\nResolved\n\nThis keyword references the Resolved issue property. This property is set based on the current value or combination of values for any custom field that stores a state type. In the default State field, the Resolved property is enabled for the values Fixed, Won't fix, Duplicate, Incomplete, Obsolete, and Can't reproduce.\n\nFor projects that use multiple state-type fields, the Resolved property is only true when all the state-type fields are assigned values that are considered to be resolved.\n\nFor example, to find all resolved issues that were updated today, enter #Resolved updated: Today.\n\nUnresolved\n\nThis keyword references the Unresolved issue property. This property is set based on the current value or combination of values for any custom field that stores a state type. In the default State field, the Resolved property is disabled for the values Submitted, Open, In Progress, Reopened, and To be discussed.\n\nFor projects that use multiple state-type fields, the Unresolved property is true when any state-type field is assigned a value that is not considered to be resolved.\n\nFor example, to find all unresolved issues that are assigned to the user john.doe in the Test project, enter #Unresolved project: Test for: john.doe.\n\nReleased\n\nThis keyword references the Released property for values in a field that stores a version type. It can only be used together with the attribute name or alias for a version field. This means that it cannot be referenced as a single value.\n\nWith fields that store multiple values, the search query returns issues for which at least one of the versions that are stored in the field is marked as released.\n\nFor example, to find all issues in the Test project that are fixed in a version that has not yet been released, enter in: Test fixed in: -Released.\n\nArchived\n\nThis keyword references the Archived property for values in a field that stores a version type. It can only be used together with the attribute name or alias for a version field. This means that it cannot be referenced as a single value.\n\nWith fields that store multiple values, the search query only returns issues for which all the versions that are stored in the field are marked as archived.\n\nFor example, to find all issues in the Test project that are fixed in a version that has been archived, enter in: Test fixed in: Archived.\n\nOperators\n\nThe search query grammar applies default semantics to search queries that do not contain explicit logical operators.\n\nSearches that specify values for multiple attributes are treated as conjunctive. This means that the values are handled as if joined by an AND operator. For example, State: {In Progress} Priority: Critical returns issues that are assigned the specified state and priority.\n\nThis extends to queries that look for the presence or absence of a value for a specific attribute (has) in combination with a reverence to a specific value for the same attribute. The presence or absence of a value and the value itself are considered as separate attributes in the issue. For example, has: assignee Assignee: me only returns issues where the assignee is set and that assignee is you.\n\nFor text search, searches that include multiple words are treated as conjunctive. This means that the words are handled as if joined by an AND operator. For example, State: Open context usage returns issues that contain matching forms for both context and usage.\n\nSearches that include multiple values for a single attribute are treated as disjunctive. This means that the values are handled as if joined by an OR operator. For example, State: {In Progress}, {To be discussed} returns issues that are assigned either one or the other of these two states.\n\nYou can override the default semantics by applying explicit operators to the query.\n\nand\n\nThe AND operator combines matches for multiple search attributes to narrow down the search results. When you join search arguments with the AND operator, the resulting issues must contain matches for all the specified attributes. Use this operator for issue fields that store enum[*] types and tags.\n\nSearch arguments that are joined with an AND operator are always processed as a group and have a higher priority than other arguments that are joined with an OR operator in the query.\n\nHere are a few examples of search queries that contain AND operators:\n\nTo find issues in the Ktor project that are tagged as both Next build and to be tested, enter:\n\nin: Ktor and tag: {Next build} and tag: {to be tested}\n\nThe AND operator between the two tags ensures that the results only contain issues that have both tags.\n\nTo find all issues that are set as Critical priority in the Ktor project or are set as Major priority and are assigned to you in the Kotlin project, enter:\n\nin: Ktor #Critical or in: Kotlin #Major and for: me\n\nIf you were to remove the operators in this query, the references to the project and priority are parsed as disjunctive (OR) statements. The reference to the assignee (me) is then joined with a conjunctive (AND) statement. Instead of getting critical issues in the Ktor project plus a list of major-priority issues that you are assigned in Kotlin, you would only issues that are assigned to you that are either major or critical in either Ktor or Kotlin.\n\nor\n\nThe OR operator combines matches for multiple search attribute to broaden the search results.\n\nThis is very useful when searching for a term which has a synonym that might be used in an issue instead. For example, a search for lesson OR tutorial returns issues that contain matching forms for either \"lesson\" or \"tutorial\". If you remove the OR operator from the query, the search is performed conjunctively, which means the result would only include issues that contain matching forms for both words.\n\nHere's another example of a search query that contains an OR operator:\n\nTo find all issues in the Ktor project that are assigned to you or are tagged as to be tested in any project, enter:\n\nin: Ktor for: me or tag: {to be tested}\n\nParentheses\n\nUsing parentheses ( and ) combines various search arguments to change the order in which the attributes and operators are processed. The part of a search query inside the parentheses has priority and is always processed as a single unit.\n\nThe most common use of parentheses is to enclose two search arguments that are separated by an OR operator and further restrict the search results by joining additional search arguments with AND operators.\n\nAny time you use parentheses in a search query, you need to provide all the operators that join the parenthetical statement to neighboring search arguments. For example, the search query in: Kotlin #Critical (in: Ktor and for:me) cannot be processed. It must be written as in: Kotlin #Critical or (in: Ktor and for:me) instead.\n\nHere's an example of a search query that uses parentheses:\n\nTo find all issues that are assigned to you and are either assigned Critical priority in the Kotlin project or are assigned Major priority in the Ktor project, enter:\n\n(in: Kotlin #Critical or in: Ktor #Major) and for: me\n\nSymbols\n\nThe following symbols can be used to extend or refine a search query.\n\nSymbol\n\nDescription\n\nExamples\n\n-\n\nExcludes a subset from a set of search query results. When you use this symbol with a single value, do not use the number sign.\n\nTo find all unresolved issues except for issues with minor priority and sort the list of results by priority in ascending order, enter #unresolved -minor sort by: priority asc.\n\n#\n\nIndicates that the input represents a single value.\n\nTo find all unresolved issues in the MRK project that were reported by, assigned to, or commented by the current user, enter #my #unresolved in: MRK.\n\n,\n\nSeparates a list of values for a single attribute. Can be used in combination with a range.\n\nTo find all issues assigned to, reported or commented by the current user, which were created today or yesterday, enter #my created: Today, Yesterday.\n\n..\n\nDefines a range of values. Insert this symbol between the values that define the upper and lower ranges. The search results include the upper and lower bounds.\n\nTo find all issues fixed in version 1.2.1 and in all versions from 1.3 to 1.5, enter fixed in: 1.2.1, 1.3 .. 1.5.\n\nTo find all issues created between March 10 and March 13, 2018, enter created: 2018-03-10 .. 2018-03-13.\n\n*\n\nWildcard character. Its behavior is context-dependent.\n\nWhen used with the .. symbol, substitutes a value that determines the upper or lower bound in a range search. The search results are inclusive of the specified bound.\n\nWhen used in an attribute-based search, matches zero or more characters at the end of an attribute value. For more information, see Wildcards in Attribute-based Search.\n\nWhen used in text search, matches zero or more characters in a string. For more information, see Wildcards in Text Search.\n\nTo find all issues created on or before March 10, 2018, enter created: * .. 2018-03-10\n\nTo find issues that have tags that start with refactoring, enter tag: refactoring*.\n\nTo find unresolved issues that contain image attachments in PNG format, enter #Unresolved attachments: *.png.\n\n?\n\nMatches any single character in a string. You can only use this wildcard to search in attributes that store text. For more information, see Wildcards in Text Search.\n\nTo find issues that contain the words \"prioritize\" or \"prioritise\" in the issue description, enter description: prioriti?e\n\n{ }\n\nEncloses attribute values that contain spaces.\n\nTo find all issues with the Fixed state that have the tag to be tested, enter #Fixed tag: {to be tested}.\n\nDate and Period Values\n\nSeveral search attributes reference values that are stored as a date. You can search for dates as single values or use a range of values to define a period.\n\nSpecify dates in the format: YYYY-MM-DD or YYYY-MM or MM-DD. You also can specify a time in 24h format: HH:MM:SS or HH:MM. To specify both date and time, use the format: YYYY-MM-DD}}T{{HH:MM:SS. For example, the search query created: 2010-01-01T12:00 .. 2010-01-01T15:00 returns all issues that were created on 1 January 2010 between 12:00 and 15:00.\n\nPredefined Relative Date Parameters\n\nYou can also use pre-defined relative parameters to search for date values. The values for these parameters are calculated relative to the current date according to the time zone of the current user. The actual value for each parameter is shown in the query assist panel.\n\nThe following relative date parameters are supported:\n\nParameter\n\nDescription\n\nNow\n\nThe current instant.\n\nToday\n\nThe current calendar day.\n\nTomorrow\n\nThe next calendar day.\n\nYesterday\n\nThe previous calendar day.\n\nSunday\n\nThe calendar Sunday for the current week.\n\nMonday\n\nThe calendar Monday for the current week.\n\nTuesday\n\nThe calendar Tuesday for the current week.\n\nWednesday\n\nThe calendar Wednesday for the current week.\n\nThursday\n\nThe calendar Thursday for the current week.\n\nFriday\n\nThe calendar Friday for the current week.\n\nSaturday\n\nThe calendar Saturday for the current week.\n\n{Last working day}\n\nThe most recent working day as defined by the Workdays that are configured in the settings on the Time Tracking page in YouTrack.\n\n{This week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the current week.\n\n{Last week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the previous week.\n\n{Next week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the next week.\n\n{Two weeks ago}\n\nThe period from 00:00 Monday to 23:59 Sunday for the calendar week two weeks prior to the current date.\n\n{Three weeks ago}\n\nThe period from 00:00 Monday to 23:59 Sunday for the calendar week three weeks prior to the current date.\n\n{This month}\n\nThe period from the first day to the last day of the current calendar month.\n\n{Last month}\n\nThe period from the first day to the last day of the previous calendar month.\n\n{Next month}\n\nThe period from the first day to the last day of the next calendar month.\n\nOlder\n\nThe period from 1 January 1970 to the last day of the month two months prior to the current date.\n\nCustom Date Parameters\n\nIf the predefined date parameters don't help you find issues that matter most to you, define your own date range in your search query. Here are a few examples of the queries you can write with custom date parameters:\n\nFind issues that have new comments added in the last seven days:\n\ncommented: {minus 7d} .. Today\n\nFind issues that were updated in the last two hours:\n\nupdated: {minus 2h} .. *\n\nFind unresolved issues that are at least one and a half years old:\n\ncreated: * .. {minus 1y 6M} #Unresolved\n\nFind issues that are due in five days:\n\nDue Date: {plus 5d}\n\nTo define a custom time frame in your search queries, use the following syntax:\n\nTo specify dates or times in the past, use minus.\n\nTo specify dates or times in the future, use plus.\n\nSpecify the time frame as a series of whole numbers followed by a letter that represents the unit of time. Separate each unit of time with a space character. For example:\n\n2y 3M 1w 2d 12h\n\nQueries that specify hours will filter for events that took place during the specified hour. For example, if it is currently 15:35, a query that is written as created: {minus 48h} returns issues that were created two days ago, at any time between 3 and 4 PM. Meanwhile, a query that is written as created: {minus 2d} returns all issues that were created two days ago at any time between midnight and 23:59.\n\nThis level of precision only applies to hours. A query that references the unit of time as 14d returns exactly the same results as 2w.\n\nSearch queries that specify units of time shorter than one hour (minutes, seconds) are not supported.\n\nSearch Query Grammar\n\nThis page provides a BNF description of the YouTrack search query grammar.\n\n\u003cSearchRequest\u003e ::= \u003cOrExpression\u003e \u003cOrExpession\u003e ::= \u003cAndExpression\u003e ('or' \u003cAndExpression\u003e)* \u003cAndExpression\u003e ::= \u003cAndOperand\u003e ('and' \u003cAndOperand\u003e)* \u003cAndOperand\u003e ::= '('\u003cOrExpression\u003e? ')' | Term \u003cTerm\u003e ::= \u003cTermItem\u003e* \u003cTermItem\u003e ::= \u003cQuotedText\u003e | \u003cNegativeText\u003e | \u003cPositiveSingleValue\u003e | \u003cNegativeSingleValue\u003e | \u003cSort\u003e | \u003cHas\u003e | \u003cCategorizedFilter\u003e | \u003cText\u003e \u003cCategorizedFilter\u003e ::= \u003cAttribute\u003e ':' \u003cAttributeFilter\u003e (',' \u003cAttributeFilter\u003e)* \u003cAttribute\u003e ::= \u003cname of issue field\u003e \u003cAttributeFilter\u003e ::= ('-'? \u003cValue\u003e ) | ('-'? \u003cValueRange\u003e) | \u003cLinkedIssuesQuery\u003e \u003cLinkedIssuesQuery\u003e ::= ( \u003cOrExpression\u003e ) \u003cValueRange\u003e ::= \u003cValue\u003e '..' \u003cValue\u003e \u003cPositiveSingleValue\u003e ::= '#'\u003cSingleValue\u003e \u003cNegativeSingleValue\u003e ::= '-'\u003cSingleValue\u003e \u003cSingleValue\u003e ::= \u003cValue\u003e \u003cSort\u003e ::= 'sort by:' \u003cSortField\u003e (',' \u003cSortField\u003e)* \u003cSortField\u003e ::= \u003cSortAttribute\u003e ('asc' | 'desc')? \u003cHas\u003e ::= 'has:' \u003cAttribute\u003e (',' \u003cAttribute\u003e)* \u003cQuotedText\u003e ::= '\"' \u003ctext without quotes\u003e '\"' \u003cNegativeText\u003e ::= '-' \u003cQuotedText\u003e \u003cText\u003e ::= \u003ctext without parentheses\u003e \u003cValue\u003e ::= \u003cComplexValue\u003e | \u003cSimpleValue\u003e \u003cSimpleValue\u003e ::= \u003cvalue without spaces\u003e \u003cComplexValue\u003e ::= '{' \u003cvalue (can have spaces)\u003e '}'\n\nGrammar is case-insensitive.\n\nFor a complete list of search attributes, see Issue Attributes.\n\nTo see sample queries for common use cases, see Sample Search Queries.\n\n11 November 2025",
    "link": "https://www.jetbrains.com/help/youtrack/cloud/search-and-command-attributes.html",
    "snippet": "This page provides a list of attributes and keywords that are used in YouTrack query language. You'll also find a complete list of operators, symbols, and ...",
    "title": "Search Query Reference | YouTrack Cloud Documentation - JetBrains"
  },
  {
    "content_readable": "Introduced in 2020, the GitHub user profile README allow individuals to give a long-form introduction. This multi-part tutorial explains how I setup my own profile to create dynamic content to aid discovery of my projects:\n\nwith the Liquid template engine and Shields (Part 1 of 4)\nusing GitHub's GraphQL API to query dynamic data about all my repos (keep reading below)\nfetching RSS and Social cards from third-party sites (Part 3 of 4)\nautomating updates with GitHub Actions (Part 4 of 4)\n\nYou can visit github.com/j12y to see the final result of what I came up with for my own profile page.\n\nThe GitHub Repo Gallery\n\nThe intended behavior for my repo gallery is to create something similar to pinned repositories but with a bit more visual pizzazz to identify what the projects are about.\n\nIn addition to source code, the repo can have metadata associated with it:\n\n✔️ Name of the repository\n✔️ Short description of the project\n✔️ Programming language used for the project\n✔️ List of tags / topics\n✔️ Image that can be used for social cards\n\nAbout\n\nThe About has editable fields to set the description and topics.\n\nSettings\n\nThe Settings includes a place to upload an image for social media preview cards.\n\nIf you don't set a preview card image, GitHub will generate one automatically that includes some basic profile statistics and your user profile image.\n\nGetting Started with the GitHub REST API\n\nThe way I structured this project is to build a library of any functions related to querying GitHub in src/gh.ts. I used a .env file to store my personal access (classic) token for authentication during local development.\n\n├── package.json\n├── .env\n├── src\n│   ├── app.ts\n│   ├── gh.ts\n│   └── template\n│       ├── README.liquid\n│       ├── contact.liquid\n│       └── gallery.liquid\n└── tsconfig.json\n\n\nI started by using REST endpoints with the Octokit library and TypeScript bindings.\n\n// src/gh.ts\nimport { Octokit } from 'octokit';\nimport { RestEndpointMethodTypes } from '@octokit/plugin-rest-endpoint-methods'\nconst octokit = new Octokit({ auth: process.env.TOKEN});\n\nexport class GitHub {\n    // GET /users/{user}\n    // https://docs.github.com/en/rest/users/users#get-a-user\n    async getUserDetails(user: string): Promise\u003cRestEndpointMethodTypes['users']['getByUsername']['response']['data']\u003e {\n        const { data } = await octokit.rest.users.getByUsername({\n            username: user\n        });\n\n        return data;\n    };\n}\n\n\nFrom src/app.ts I initialize the GithHub class, fetch the results, and can inspect the data being returned as a way to get comfortable with the various endpoints.\n\n// src/app.ts\nimport dotenv from 'dotenv';\nimport { GitHub } from \"./gh\";\n\nexport async function main() {\n  dotenv.config();\n  const gh = new GitHub()\n\n  const details = await gh.getUserDetails();\n  console.log(details);\n}\nmain();\n\n\nI typically get started on projects with simple tests like this to make sure all the various pieces to an integration can be configured and work together before getting too far.\n\nUse the GitHub GraphQL Endpoint\n\nTo get the data needed for the gallery layout, it would be necessary to make multiple calls to REST endpoints. In addition there is some data not yet available from the REST endpoint at all.\n\nSwitching to query using the GitHub GraphQL interface becomes helpful. This single endpoint can process a number of queries and give precise control over the data needed.\n\n💡 The GitHub GraphQL Explorer was fundamentally useful for me to get the right queries defined\n\nThis query needs authorization with the personal access token to fetch profile details about followers similar to some of the details returned from the REST endpoints.\n\n// src/gh.ts\n\nconst { graphql } = require(\"@octokit/graphql\")\n\nexport class GitHub \n    // https://docs.github.com/en/graphql\n    graphqlWithAuth = graphql.defaults({\n        headers: {\n            authorization: `token ${process.env.TOKEN}`\n        }\n    })\n\n    async getProfileOverview(name: string): Promise\u003cany\u003e {\n        const query = `\n            query getProfileOverview($name: String!) { \n                user(login: $name) { \n                    followers(first: 100) {\n                        totalCount\n                        edges {\n                            node {\n                                login\n                                name\n                                twitterUsername\n                                email\n                            }\n                        }\n                    }\n                }\n            }\n        `;\n        const params = {'name': name};\n\n        return await this.graphqlWithAuth(query, params);\n    }\n}\n\n\nThere are other resources such as Learn GraphQL if you haven't written many queries yet which explains the basics around syntax, schemas, and types.\n\nGetting used to GitHub's GraphQL schema primarily involves walking a series of edges to find linked nodes for objects of interest and their data attributes. In this case, I started by querying a user profile, finding the list of linked followers, and then inspecting their corresponding node's login, name, and email address.\n\n   ┌────────────┐\n   │    user    │\n   └─────┬──────┘\n         │\n         └──followers\n               │\n               ├─── totalCount\n               │\n               └─── edges\n                     │\n                     └── node\n\n\n\nFaceted Search by Topic Frequency\n\nI often want to find repositories by a topic. The user interface makes it easy to filter among many repositories by programming language such as python but unless you know which topics are relevant can become hit or miss. Was it nlp or nltk I used to categorize related repositories. Did I use dolby or dolbyio to identify repos I have for work projects.\n\nA faceted search that narrows down the number of matching repositories can be helpful for finding relevant projects like this. Given topics on GitHub are open-ended and not constrained to fixed values, it can be easy to accidentally categorize repos with variations like lambda and aws-lambda such that searches only identify partial results.\n\nTo address this, a GraphQL query gathering topics by frequency of usage within an organization or individual account can help with identifying the most useful topics.\n\nThe steps for this would be:\n\nQuery repository topics\nProcess results to group topics by frequency\nUse a template to render the gallery\n\n1 - Query Repository Topics\n\nI used the following GraphQL query to fetch my repositories and their corresponding topics.\n\nconst query = `\n    query getReposOverview($name: String!) {\n        user(login: $name) {\n            repositories(first: 100 ownerAffiliations: OWNER) {\n                edges {\n                    node {\n                        name\n                        url\n                        description\n                        openGraphImageUrl\n                        repositoryTopics(first: 100) {\n                            edges {\n                                node {\n                                    topic {\n                                        name\n                                    }\n                                }\n                            }\n                        }\n                        primaryLanguage {\n                            name\n                        }\n                    }\n                }\n            }\n        }\n    }\n`;\n\n\nThis query starts by filtering by user owned repositories (not counting forks) along with the metadata such as the social image.\n\n2 - Process Results and Group Topics by Frequency\n\nIterating over the results of the query the convention used was to look for anything with the topic github-gallery as something to be featured in the gallery. We also get a count of usage for each of the other topics and programming languages.\n\nvar topics: {[id: string]: number } = {};\nvar languages: {[id: string]: number } = {};\nvar gallery: {[id: string]: any } = {};\n\nconst repos = await gh.getReposOverview(user);\nfor (let repo of repos.user.repositories.edges) {\n  // Count occurrences of each topic\n  repo.node.repositoryTopics.edges.forEach((topic: any) =\u003e {\n    if (topic.node.topic.name == 'github-gallery') {\n      gallery[repo.node.name] = repo;\n    } else {\n      topics[topic.node.topic.name] = topic.node.topic.name in topics ? topics[topic.node.topic.name] + 1 : 1;\n    }\n  });\n\n  // Count and include count of language used\n  if (repo.node.primaryLanguage) {\n    languages[repo.node.primaryLanguage.name] = repo.node.primaryLanguage.name in languages ? languages[repo.node.primaryLanguage.name] + 1 : 1;\n  }\n}\n\n\n3 - Use a template to render the gallery\n\nThe topics are ordered by how often they are used. From the previous post on setting up a dynamic profile, I'm passing scope to the liquid engine for any data to be made available in a template.\n\n  // Share topics sorted by frequency of use for filtering repositories\n  // from the organization\n  scope['topics'] = Object.entries(topics).sort(function (first, second) {\n    return second[1] - first[1];\n  });\n  scope['languages'] = Object.entries(languages).sort(function (first, second) {\n    return second[1] - first[1];\n  });\n\n  // Gather topics across repos\n  scope['gallery'] = Object.values(gallery);\n\n\n\nThe repository page on GitHub uses query parameters to sort and filter, so items like topic:nltk can be passed directly in the URL to load a filtered view of repositories. The shields create a nice looking button for navigating to the topic, and use of icons for programming languages helps find relevant code samples.\n\n\u003cp\u003eExplore some of my projects: \u003cbr/\u003e\n{% for language in languages %}\u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=language%3A{{language[0]}}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/badge/{{ language[0] }}-{{ language[1] }}-lightgrey?logo={{ language[0] }}\u0026label={{ language[0] }}\u0026labelColor=000000\" alt=\"{{ language[0] }}\"/\u003e\u003c/a\u003e {% endfor %}\n{% for topic in topics %}\u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=topic%3A{{topic[0]}}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/static/v1?label={{topic[0]}}\u0026message={{ topic[1] }}\u0026labelColor=blue\"/\u003e\u003c/a\u003e {% endfor %}\n\u003c/p\u003e\n\n\nThe presentation includes a 3-column row in a table for displaying the metadata about each featured gallery project. This could display all repositories, but limiting to one or two rows seems sensible for managing screen space.\n\n{% for tile in gallery limit:3 %}\n\u003ctd width=\"25%\" valign=\"top\" style=\"padding-top: 20px; padding-bottom: 20px; padding-left: 30px; padding-right: 30px;\"\u003e\n\u003ca href=\"{{ tile.node.url }}\"\u003e\u003cimg src=\"{{ tile.node.openGraphImageUrl }}\"/\u003e\u003c/a\u003e\n\u003cp\u003e\u003cb\u003e\u003ca href=\"{{ tile.node.url }}\"\u003e{{ tile.node.name }}\u003c/b\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e{{ tile.node.description }}\u003cbr/\u003e\n{% for topic in tile.node.repositoryTopics.edges %} \u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=topic%3A{{topic.node.topic.name }}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/badge/{{ topic.node.topic.name | replace: \"-\", \"--\" }}-blue?style=pill\"/\u003e\u003c/a\u003e {% endfor %}\n\u003c/p\u003e\n\u003c/td\u003e\n{% endfor %}\n\n\nWith all of that put together, we now have a gallery that displays a picture along with the name, description, and tags. The picture can highlight a user interface, architectural diagram, or some other branded visual to help identify the purpose of the project visually.\n\nWe can also use this to maintain our list of topics and make finding relevant topics for an audience easier to discover.\n\nLearn more\n\nI hope this overview helps with getting yourself sorted. The next article will dive into some of the other ways of aggregating content.\n\nFetching RSS and Social Cards for GitHub Profile (Part 3 of 4)\nAutomating GitHub Profile Updates with Actions (Part 4 of 4)\n\nDid this help you get your own profile started? Let me know and follow to get notified about updates.",
    "link": "https://dev.to/j12y/query-github-repo-topics-using-graphql-35ha",
    "snippet": "Creating a customized user profile page for GitHub to showcase work projects and make navigation to relevant topics easier.",
    "title": "Query GitHub Repo Topics Using GraphQL - DEV Community"
  },
  {
    "content_readable": "Updated\n\n4 days ago\n\nWith millions of conversations happening all over the web each day, it can be a long and tedious task trying to get more relevant mentions and tighten the scope of your query, but with the help of Advanced Topic Query, it can be at your fingertips.\n\nIn Social Listening, you have the option to create an advanced query that is not limited to ANY, ALL, or NONE formatting of query building. Advanced query builder can be used to form complex text queries which are not possible with a normal query builder.\n\nWhat is an Advanced Topic Query?\n\nAdvanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more.\n\nBy using advanced query you can pinpoint relevant information which is not possible with basic topic query.\n\nIt gives you the power to find the needle in a haystack.\n\n​\n\nBasic Topic Query v/s Advanced Topic Query\n\nWith more operators to use you can fetch conversations by language, geography, social media channel, volume, author, #listening, @account monitoring, user segment, and much more, it can give you access to more actionable insights.\n\nIn Basic Query, you can only use boolean operators like OR/ NOT/ AND/ along with NEAR. On the other hand, in Advanced Topic Query, it gives you access to use OR with/ inside AND, NOT (nested and within operator use cases), advanced operators, exact match operators etc.\n\nLet's see the use cases where advanced query will help in getting more insightful mentions –\n\nUse case #1: To search \"pepsi\" OR \"drink\" along with \"cups\".\n\nBasic Query\n\nAdvancd Query\n\nUse case #2: To get mentions of \"pepsi\" along with \"coke\" or \"sprite\" but not \"miranda\" with people having \"follower count\" between 100 to 1000 on \"twitter\".\n\nBasic Query\n\nAdvanced Query\n\nNot feasible in the basic Topic query\n\nThis is where we need the advanced Topic query.​\n\nHow to create an advanced Topic query?\n\nClick the New Tab icon. Under Sprinklr Insights, Click Topics within Listening.\n\nOn the Topics window, click Add Topic in the top right corner. Fill in the required fields and click Create.\n\nIn the Setup Query tab of Create New Topic window, select Advanced Query in the query section.\n\n​\n\nType your query in the Advanced Query field with the required operators and syntax.\n\nClick Save.\n\nTip: While using Instagram as a Listening Source, be sure that your query keywords include hashtags.\n\nWhich operators to use for building Topic queries?\n\nOperators for Topic queries\n\nIn creation of advanced queries along with boolean operators OR/ AND/ NOT/ etc, Sprinklr also supports operator types –\n\nSearch Operators\n\nExact Match Operators\n\nOperators for Getting Post Replies/Comments​\n\nSprinklr provides its user edge by giving them power to use Keywords List inside advanced query along with Operators mentioned.\n\nCreate query using Topic query operators\n\nFollowing are some most used operator examples and their results –\n\nOperator\n\nExample\n\nResult\n\nhello\n\nSearch for the term \"hello\"\n\nsocial sprinklr\n\nSearch for the phrases \"social\" and \"sprinklr\"\n\n​\n\nNote: Using this will show preview but topic can not be saved as it will show error, Use \"Social Sprinklr\" or (Social AND/OR/ NOT/ NEAR Sprinklr) to eliminate error.\n\nAND\n\nsocial AND sprinklr\n\nSearch for \"social\" and \"sprinklr\" anywhere within the complete message, irrespective of keywords between them\n\nOR\n\nsocial OR sprinklr\n\nSearch for \"social\" or \"sprinklr\"\n\nNOT\n\n\"social media\" NOT \"facebook\"\n\nSearch for results that contain \"social media\" but not \"facebook\"\n\n~\n\n\"social media\"~10\n\nSearch for \"social\" and \"media\" within 10 words of each other\n\nNEAR\n\nsocial NEAR/10 media\n\nSearch for \"social\" and \"media\" within 10 words of each other\n\nNote: This operator can be used with keyword lists.\n\nONEAR\n\nsocial ONEAR/10 media\n\nSearch for \"social\" and \"media\" within 10 words of each other in an ordered way\n\nNote: This operator searches social ahead of media.\n\ntitle\n\ntitle: (\"social media\")\n\nSearch for social media in the title of the message\n\nNote: It is mostly used for News, blogs, reviews and other sites.\n\nauthor\n\nauthor: \"social_media\"\n\nFetches all the mentions from author name: social_media\n\nSome other operators which are supported by Sprinklr are –\n\nProximity: It is used to define proximity or distance between 2 keywords only, whereas, NEAR can be used to define proximity between two keywords as well as keyword lists.\n\nOnear (Ordered Near): It sets the order in which the keywords will appear. For example, Keyword-List1 ONEAR/10 Keyword-List2 will ensure keywords from Keyword-List1 appear first and then Keyword-List2 keywords will follow within space of maximum 10.\n\nStep by step guide to make advanced Topic query\n\nUse case\n\nTo write query fetching mentions of ZARA –\n\n​\n\n(# listening is used for instagram listening)\n\nGetting mention along with clothing or fashion related terms only –\n\nRemoving profanity from mention (use case specific) –\n\nRemoving profanity from mention (use case specific) –\n\nAs social media has lots of profane words you can also remove it by making a keyword list and negating it from query –\n\nFiltering Mentions in English –\n\n​\n\nApplying source input as Twitter –\n\nGetting mentions of those users which have followers between 100 to 1000 –\n\n​\n\nAdvanced example showcasing use of Topic query operators and keyword list –\n\nBest practices while using Advanced Query\n\nUse of Parentheses\n\n​Parentheses are not necessary to enclose a search query but can be useful while grouping operations together for more complex queries.\n\n​\n\nFor example, if you want to return results that mention Samsung or Apple phones, and also want to query content that mentions phones along with either Apple or Samsung, you could use parentheses around Apple and Samsung to group three keywords together, as shown below –\n\nphone AND (Apple OR Samsung)\n\n​\n\nUse of parentheses within brackets, is further explained below with an example –\n\n[(internet of things ~3) OR iot OR internetofthings) AND (robots OR robot OR #robot)] NOT [things]\n\nTip: You can also use parentheses within brackets to set off additional operations within the Advanced Query field. The end result should look similar to the result summary of a basic query, built using multiple operations within a single section.\n\n\nAs a part of the rest of the query, this will perform the following operations –\n\nSearch for posts that contain the phrase \"internet of things\" or \"#internetofthings\"\n\nFrom within those results, keep any result that also says \"robots\" or \"robot\" or \"#robot\" within three words (a proximity search) of either \"internet of things\" or \"iot\" or \"internetofthings\".\n\nDiscard any results that just have the phrase \"things\" within.\n\nParentheses nested within brackets intend to set off different operations as isolated processes. In the previous example, if you build an Advanced Query that states [(internet of things OR iot OR internet of things) AND (robots OR robot OR #robot)] your query will return results that contain ANY of the first three terms and the second three terms.\n\nHowever, if you build an Advanced Query that states [internet of things OR iot OR internet of things AND robots OR robot OR #robot], your query will return any result that contains the phrase \"internet of things\" or the word \"iot\" or the word \"robot\" or the hashtag #robot or specifically the phrase \"internet of things\" within the same message as the word \"robots\".\n\nNote:\n\nYou cannot use a \"NOT\" statement with an \"OR\" statement.\n\n\nExample:\n( social OR NOT media ) ❌\n( social NOT media ) ✅\n\n(( social OR ( media NOT facebook )) ✅\n\nWhy?\n\nQuery should not contain \"NOT\" terms in \"OR\" with other terms, \"NOT\" clauses should be used in \"AND\" with other terms, using \"NOT\" in \"OR\" will bring too much data.\n\nUse of Quotation marks\n\nQuotation marks can be used for phrases in which you are looking for an exact match of those particular words in a specific order. Using parentheses or quotation marks for single-word queries is not mandatory.\n\nUse straight quotation marks ( \" \" ) for outlining phrases within it. The use of curved quotation marks (“ ”) will not produce your desired results.\n\nParentheses are generally used to group keywords or phrases joined by one or more operators together, but with other keywords involved, parentheses and quotations would act differently. For example –\n\nVersion 1: \"Phil Schiller\" AND \"Apple Marketing\" will return results for content with the exact phrase Phil Schiller (or phil schiller) and the exact phrase Apple Marketing (or apple marketing).\n\nNote: Here exact does not mean case sensitive as in the case of exactMessage Operator.\n\nExample: exactMessage: (\"Phil Schiller\" AND \"Apple Marketing\"), which will fetch results for phrase Phil Schiller (not phil schiller) and the exact phrase Apple Marketing (not apple marketing).\n\n\nVersion 2: \"Phil Schiller\" AND (Apple OR Marketing) will return results for content with the phrase \"Phil Schiller\" (together) and at least one of the words, Apple or Marketing.\n\nHandling for Broad \u0026 Ambiguous Keywords\n\nIt is very important to not use/reduce use of broad keywords in advanced queries. Broader keywords will fetch mentions that are unrelated to topic of interest, and eventually hinder dashboard/insights\n\nFor all keywords used in an advanced topic query, ensure they are directly related to the topic of interest.\n\nIn case keywords are broad but relevant to topic, they should be tied to some relevant keywords related to that topic, by using NEAR Operators\n\nExample: Robot is an important keyword for Robot Company. However just using this keyword will fetch irrelevant keywords as it’s a broad keyword used for other entities as well (Robot Street, etc).\n\nInstead of using just Robot keyword, we should use: Robot NEAR/4 (Technology OR “machine” OR # tech OR IOT OR “Internet of things” ….)\n\nNote how keywords related to Robot are used with NEAR Operator. Related keywords could be related entities, industry keywords, parent company, country keywords, etc.\n\nFrequently asked questions\n\n​\n\nIs it compulsory to put quotation marks around phrases like \"apple music\" or can we use apple music directly?\n\nHow can I eliminate posts with many spam #’s or @’s?\n\nCan exact match or parent operators be used in advanced query?\n\nWhy am I able to see mentions in preview during making of topic but not in dashboard?\n\nDuring listening to @ mentions a lot of spam mentions are also getting tagged along, e.g. like wanting to get mentions of @tom but messages of @tom_fan56 are also coming. How to remove these irrelevant mentions?\n\nIf I write query as “tom” will it also fetch mentions such as tom_jerry / @tom / #tom ?\n\n​",
    "link": "https://www.sprinklr.com/help/articles/faqs-and-advanced-usecases/create-an-advanced-topic-query/646331628ea3c9635cf36711",
    "snippet": "Advanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more. By ...",
    "title": "‎Create an Advanced Topic Query | Sprinklr Help Center"
  },
  {
    "content_readable": "The query language for the Azure Resource Graph supports many operators and functions. Each work and operate based on Kusto Query Language (KQL). To learn about the query language used by Resource Graph, start with the tutorial for KQL.\n\nThis article covers the language components supported by Resource Graph:\n\nUnderstanding the Azure Resource Graph query language\n\nResource Graph tables\nExtended properties\nResource Graph custom language elements\n\nShared query syntax (preview)\nSupported KQL language elements\n\nSupported tabular/top level operators\nQuery scope\nEscape characters\nNext steps\n\nResource Graph tables\n\nResource Graph provides several tables for the data it stores about Azure Resource Manager resource types and their properties. Resource Graph tables can be used with the join operator to get properties from related resource types.\n\nResource Graph tables support the join flavors:\n\ninnerunique\ninner\nleftouter\nfullouter\n\nResource Graph table Can join other tables? Description\nAdvisorResources Yes Includes resources related to Microsoft.Advisor.\nAlertsManagementResources Yes Includes resources related to Microsoft.AlertsManagement.\nAppServiceResources Yes Includes resources related to Microsoft.Web.\nAuthorizationResources Yes Includes resources related to Microsoft.Authorization.\nAWSResources Yes Includes resources related to Microsoft.AwsConnector.\nAzureBusinessContinuityResources Yes Includes resources related to Microsoft.AzureBusinessContinuity.\nChaosResources Yes Includes resources related to Microsoft.Chaos.\nCommunityGalleryResources Yes Includes resources related to Microsoft.Compute.\nComputeResources Yes Includes resources related to Microsoft.Compute Virtual Machine Scale Sets.\nDesktopVirtualizationResources Yes Includes resources related to Microsoft.DesktopVirtualization.\nDnsResources Yes Includes resources related to Microsoft.Network.\nEdgeOrderResources Yes Includes resources related to Microsoft.EdgeOrder.\nElasticsanResources Yes Includes resources related to Microsoft.ElasticSan.\nExtendedLocationResources Yes Includes resources related to Microsoft.ExtendedLocation.\nFeatureResources Yes Includes resources related to Microsoft.Features.\nGuestConfigurationResources Yes Includes resources related to Microsoft.GuestConfiguration.\nHealthResourceChanges Yes Includes resources related to Microsoft.Resources.\nHealthResources Yes Includes resources related to Microsoft.ResourceHealth.\nInsightsResources Yes Includes resources related to Microsoft.Insights.\nIoTSecurityResources Yes Includes resources related to Microsoft.IoTSecurity and Microsoft.IoTFirmwareDefense.\nKubernetesConfigurationResources Yes Includes resources related to Microsoft.KubernetesConfiguration.\nKustoResources Yes Includes resources related to Microsoft.Kusto.\nMaintenanceResources Yes Includes resources related to Microsoft.Maintenance.\nManagedServicesResources Yes Includes resources related to Microsoft.ManagedServices.\nMigrateResources Yes Includes resources related to Microsoft.OffAzure.\nNetworkResources Yes Includes resources related to Microsoft.Network.\nPatchAssessmentResources Yes Includes resources related to Azure Virtual Machines patch assessment Microsoft.Compute and Microsoft.HybridCompute.\nPatchInstallationResources Yes Includes resources related to Azure Virtual Machines patch installation Microsoft.Compute and Microsoft.HybridCompute.\nPolicyResources Yes Includes resources related to Microsoft.PolicyInsights.\nRecoveryServicesResources Yes Includes resources related to Microsoft.DataProtection and Microsoft.RecoveryServices.\nResourceChanges Yes Includes resources related to Microsoft.Resources.\nResourceContainerChanges Yes Includes resources related to Microsoft.Resources.\nResourceContainers Yes Includes management group (Microsoft.Management/managementGroups), subscription (Microsoft.Resources/subscriptions) and resource group (Microsoft.Resources/subscriptions/resourcegroups) resource types and data.\nResources Yes The default table if a table isn't defined in the query. Most Resource Manager resource types and properties are here.\nSecurityResources Yes Includes resources related to Microsoft.Security.\nServiceFabricResources Yes Includes resources related to Microsoft.ServiceFabric.\nServiceHealthResources Yes Includes resources related to Microsoft.ResourceHealth/events.\nSpotResources Yes Includes resources related to Microsoft.Compute.\nSupportResources Yes Includes resources related to Microsoft.Support.\nTagsResources Yes Includes resources related to Microsoft.Resources/tagnamespaces.\n\nFor a list of tables that includes resource types, go to Azure Resource Graph table and resource type reference.\n\nNote\n\nResources is the default table. While querying the Resources table, it isn't required to provide the table name unless join or union are used. But the recommended practice is to always include the initial table in the query.\n\nTo discover which resource types are available in each table, use Resource Graph Explorer in the portal. As an alternative, use a query such as \u003ctableName\u003e | distinct type to get a list of resource types the given Resource Graph table supports that exist in your environment.\n\nThe following query shows a simple join. The query result blends the columns together and any duplicate column names from the joined table, ResourceContainers in this example, are appended with 1. As ResourceContainers table has types for both subscriptions and resource groups, either type might be used to join to the resource from Resources table.\n\nResources\n| join ResourceContainers on subscriptionId\n| limit 1\n\n\nThe following query shows a more complex use of join. First, the query uses project to get the fields from Resources for the Azure Key Vault vaults resource type. The next step uses join to merge the results with ResourceContainers where the type is a subscription on a property that is both in the first table's project and the joined table's project. The field rename avoids join adding it as name1 since the property already is projected from Resources. The query result is a single key vault displaying type, the name, location, and resource group of the key vault, along with the name of the subscription it's in.\n\nResources\n| where type == 'microsoft.keyvault/vaults'\n| project name, type, location, subscriptionId, resourceGroup\n| join (ResourceContainers | where type=='microsoft.resources/subscriptions' | project SubName=name, subscriptionId) on subscriptionId\n| project type, name, location, resourceGroup, SubName\n| limit 1\n\n\nNote\n\nWhen limiting the join results with project, the property used by join to relate the two tables, subscriptionId in the above example, must be included in project.\n\nExtended properties\n\nAs a preview feature, some of the resource types in Resource Graph have more type-related properties available to query beyond the properties provided by Azure Resource Manager. This set of values, known as extended properties, exists on a supported resource type in properties.extended. To show resource types with extended properties, use the following query:\n\nResources\n| where isnotnull(properties.extended)\n| distinct type\n| order by type asc\n\n\nExample: Get count of virtual machines by instanceView.powerState.code:\n\nResources\n| where type == 'microsoft.compute/virtualmachines'\n| summarize count() by tostring(properties.extended.instanceView.powerState.code)\n\n\nResource Graph custom language elements\n\nShared query syntax (preview)\n\nAs a preview feature, a shared query can be accessed directly in a Resource Graph query. This scenario makes it possible to create standard queries as shared queries and reuse them. To call a shared query inside a Resource Graph query, use the {{shared-query-uri}} syntax. The URI of the shared query is the Resource ID of the shared query on the Settings page for that query. In this example, our shared query URI is /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS. This URI points to the subscription, resource group, and full name of the shared query we want to reference in another query. This query is the same as the one created in Tutorial: Create and share a query.\n\nNote\n\nYou can't save a query that references a shared query as a shared query.\n\nExample 1: Use only the shared query:\n\nThe results of this Resource Graph query are the same as the query stored in the shared query.\n\n{{/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS}}\n\n\nExample 2: Include the shared query as part of a larger query:\n\nThis query first uses the shared query, and then uses limit to further restrict the results.\n\n{{/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS}}\n| where properties_storageProfile_osDisk_osType =~ 'Windows'\n\n\nSupported KQL language elements\n\nResource Graph supports a subset of KQL data types, scalar functions, scalar operators, and aggregation functions. Specific tabular operators are supported by Resource Graph, some of which have different behaviors.\n\nSupported tabular/top level operators\n\nHere's the list of KQL tabular operators supported by Resource Graph with specific samples:\n\nKQL Resource Graph sample query Notes\ncount Count key vaults\ndistinct Show resources that contain storage\nextend Count virtual machines by OS type\njoin Key vault with subscription name Join flavors supported: innerunique, inner, leftouter, and fullouter. Limit of three join or union operations (or a combination of the two) in a single query, counted together, one of which might be a cross-table join. If all cross-table join use is between Resource and ResourceContainers, then three cross-table join are allowed. Custom join strategies, such as broadcast join, aren't allowed. For which tables can use join, go to Resource Graph tables.\nlimit List all public IP addresses Synonym of take. Doesn't work with Skip.\nmvexpand Legacy operator, use mv-expand instead. RowLimit max of 2,000. The default is 128.\nmv-expand List Azure Cosmos DB with specific write locations RowLimit max of 2,000. The default is 128. Limit of 3 mv-expand in a single query.\norder List resources sorted by name Synonym of sort\nparse Get virtual networks and subnets of network interfaces It's optimal to access properties directly if they exist instead of using parse.\nproject List resources sorted by name\nproject-away Remove columns from results\nsort List resources sorted by name Synonym of order\nsummarize Count Azure resources Simplified first page only\ntake List all public IP addresses Synonym of limit. Doesn't work with Skip.\ntop Show first five virtual machines by name and their OS type\nunion Combine results from two queries into a single result Single table allowed: | union [kind= inner|outer] [withsource=ColumnName] Table. Limit of three union legs in a single query. Fuzzy resolution of union leg tables isn't allowed. Might be used within a single table or between the Resources and ResourceContainers tables.\nwhere Show resources that contain storage\n\nThere's a default limit of three join and three mv-expand operators in a single Resource Graph SDK query. You can request an increase in these limits for your tenant through Help + support.\n\nTo support the Open Query portal experience, Azure Resource Graph Explorer has a higher global limit than Resource Graph SDK.\n\nNote\n\nYou can't reference a table as right table multiple times, which exceeds the limit of 1. If you do so, you would receive an error with code DisallowedMaxNumberOfRemoteTables.\n\nQuery scope\n\nThe scope of the subscriptions or management groups from which resources are returned by a query defaults to a list of subscriptions based on the context of the authorized user. If a management group or a subscription list isn't defined, the query scope is all resources, and includes Azure Lighthouse delegated resources.\n\nThe list of subscriptions or management groups to query can be manually defined to change the scope of the results. For example, the REST API managementGroups property takes the management group ID, which is different from the name of the management group. When managementGroups is specified, resources from the first 10,000 subscriptions in or under the specified management group hierarchy are included. managementGroups can't be used at the same time as subscriptions.\n\nExample: Query all resources within the hierarchy of the management group named My Management Group with ID myMG.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01\n\n\nRequest Body\n\n{\n  \"query\": \"Resources | summarize count()\",\n  \"managementGroups\": [\"myMG\"]\n}\n\n\nThe AuthorizationScopeFilter parameter enables you to list Azure Policy assignments and Azure role-based access control (Azure RBAC) role assignments in the AuthorizationResources table that are inherited from upper scopes. The AuthorizationScopeFilter parameter accepts the following values for the PolicyResources and AuthorizationResources tables:\n\nAtScopeAndBelow (default if not specified): Returns assignments for the given scope and all child scopes.\nAtScopeAndAbove: Returns assignments for the given scope and all parent scopes, but not child scopes.\nAtScopeAboveAndBelow: Returns assignments for the given scope, all parent scopes, and all child scopes.\nAtScopeExact: Returns assignments only for the given scope; no parent or child scopes are included.\n\nNote\n\nTo use the AuthorizationScopeFilter parameter, be sure to use the 2021-06-01-preview or later API version in your requests.\n\nExample: Get all policy assignments at the myMG management group and Tenant Root (parent) scopes.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-06-01-preview\n\n\nRequest Body Sample\n\n{\n  \"options\": {\n    \"authorizationScopeFilter\": \"AtScopeAndAbove\"\n  },\n  \"query\": \"PolicyResources | where type =~ 'Microsoft.Authorization/PolicyAssignments'\",\n  \"managementGroups\": [\"myMG\"]\n}\n\n\nExample: Get all policy assignments at the mySubscriptionId subscription, management group, and Tenant Root scopes.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-06-01-preview\n\n\nRequest Body Sample\n\n{\n  \"options\": {\n    \"authorizationScopeFilter\": \"AtScopeAndAbove\"\n  },\n  \"query\": \"PolicyResources | where type =~ 'Microsoft.Authorization/PolicyAssignments'\",\n  \"subscriptions\": [\"mySubscriptionId\"]\n}\n\n\nEscape characters\n\nSome property names, such as those that include a . or $, must be wrapped or escaped in the query or the property name is interpreted incorrectly and doesn't provide the expected results.\n\nDot (.): Wrap the property name ['propertyname.withaperiod'] using brackets.\n\nExample query that wraps the property odata.type:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.['odata.type']\n\n\nDollar sign ($): Escape the character in the property name. The escape character used depends on the shell that runs Resource Graph.\n\nBash: Use a backslash (\\) as the escape character.\n\nExample query that escapes the property $type in Bash:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.\\$type\n\n\ncmd: Don't escape the dollar sign ($) character.\n\nPowerShell: Use a backtick (`) as the escape character.\n\nExample query that escapes the property $type in PowerShell:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.`$type\n\n\nNext steps\n\nAzure Resource Graph query language Starter queries and Advanced queries.\nLearn more about how to explore Azure resources.",
    "link": "https://learn.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language",
    "snippet": "The query language for the Azure Resource Graph supports many operators and functions. Each work and operate based on Kusto Query Language (KQL).",
    "title": "Understanding the Azure Resource Graph query language - Microsoft"
  }
]
s4 llm_format success 2026-03-01 22:55:50 → 2026-03-01 22:56:25
Input (130504 bytes)
[
  {
    "content_readable": "The X API uses pay-per-usage pricing. No subscriptions—pay only for what you use.\n\nHow it works\n\nCredit-based\n\nPurchase credits upfront in the Developer Console. Credits are deducted as you make API requests.\n\nPer-endpoint pricing\n\nDifferent endpoints have different costs. View current rates in the Developer Console.\n\nNo commitments\n\nNo contracts, subscriptions, or minimum spend. Start and stop anytime.\n\nReal-time tracking\n\nMonitor usage and costs live in the Developer Console.\n\nEarn free xAI API credits when you purchase X API credits—up to 20% back based on your spend. Learn more\n\nIf you are on a legacy subscription package (Basic or Pro), you can opt in to Pay-per-use pricing directly from the Developer Console. If you’d like to switch back to your legacy plan at any time, you can do so from the settings page within the Developer Console.\n\nDeduplication\n\nAll resources are deduplicated within a 24-hour UTC day window. If you request and are charged for a resource (such as a Post), requesting the same resource again within that window will not incur an additional charge. This means:\n\nRequesting the same Post multiple times in a day counts as one charge\nThe deduplication window resets at midnight UTC\nThis applies to all billable resources (Posts, users, etc.)\n\nDeduplication is a soft guarantee. While it occurs in the vast majority of cases, there may be specific edge cases like service outages that result in resources not being deduplicated.\n\nCredit balance\n\nYour credit balance is displayed in the Developer Console. Credits are deducted in real-time as you make API requests.\n\nMonitor your credit balance regularly to avoid service interruptions. Add credits before your balance reaches zero to ensure uninterrupted API access.Note: It is possible for an account credit balance to go slightly negative. In this case, API requests will be blocked until you add credits to cover the negative balance.\n\nAuto-recharge\n\nEnable auto-recharge to automatically top up your credit balance and avoid service interruptions. Configure this in the Developer Console:\n\nSetting\tDescription\nRecharge amount\tThe amount to add when auto-recharge triggers (e.g., $25)\nTrigger threshold\tAuto-recharge activates when your balance falls below this amount (e.g., $5)\n\nAuto-recharge requires a saved payment method set as your default. You can cancel anytime in the Developer Console or by contacting support.\n\nSpending limits\n\nSet a maximum amount you can spend per billing cycle to control costs. When the limit is reached, API requests will be blocked until the next billing cycle.\n\nOption\tDescription\nSpending limit\tSet a specific dollar amount as your maximum spend per billing cycle\n\nUse spending limits to prevent unexpected charges, especially during development and testing.\n\nFree xAI API Credits\n\nWhen you purchase X API credits, you can earn free xAI API credits based on your cumulative spend during a billing cycle.\n\nTo receive free xAI credits, you must link your xAI team to your X developer account. You can do this by visiting your account settings in the developer console.\n\nHow it works\n\nYour cumulative spend is tracked throughout each billing cycle. As you cross spending thresholds, you unlock higher reward rates. When a new billing cycle starts, your cumulative spend resets to $0.\n\nCumulative spend\tRate\n$0 – $199\t0%\n$200 – $499\t10%\n$500 – $999\t15%\n$1,000+\t20%\n\nThe rate applies to your entire cumulative balance, but you only receive the delta—what’s newly owed minus what was already credited.\n\nExample\n\nSuppose you make several purchases throughout a billing cycle:\n\nPurchase\tRate\tTotal owed\tAlready credited\tYou receive\n$100\t0%\t$0\t$0\t$0\n$100\t10%\t$20\t$0\t$20\n$150\t10%\t$35\t$20\t$15\n$150\t15%\t$75\t$35\t$40\n$250\t15%\t$112.50\t$75\t$37.50\n$250\t20%\t$200\t$112.50\t$87.50\n$1,000\t$200\n\nThis is the same amount you’d receive from a single $1,000 purchase—the order and size of purchases doesn’t affect your total rewards.\n\nMonitoring usage\n\nTrack your API usage programmatically with the Usage endpoint:\n\ncurl \"https://api.x.com/2/usage/tweets\" \\\n  -H \"Authorization: Bearer $BEARER_TOKEN\"\n\n\nThis returns daily Post consumption counts, helping you:\n\nTrack consumption against your budget\nSet up alerts when approaching limits\nIdentify high-consumption endpoints\nGenerate usage reports\n\nEnterprise pricing\n\nFor high-volume access with dedicated support, custom rate limits, and additional features, contact our enterprise sales team.\n\nPay-per-usage plans are subject to a monthly cap of 2 million Post reads. If you need higher volume, consider an Enterprise plan.\n\nNext steps",
    "link": "https://docs.x.com/x-api/getting-started/pricing",
    "snippet": "The X API uses pay-per-usage pricing. No subscriptions—pay only for what you use. View pricing \u0026 purchase credits ...",
    "title": "Pricing - X - X Developer Platform"
  },
  {
    "content_readable": "The X API provides programmatic access to X’s public conversation. Retrieve posts, analyze trends, build integrations, and create new experiences on the platform.\n\nWhat you can do\n\nCapability\tDescription\nRead posts\tSearch, look up, and stream posts in real-time\nPublish content\tCreate posts, replies, and threads\nManage users\tLook up users, manage follows, blocks, and mutes\nAnalyze data\tAccess metrics, trends, and engagement analytics\nBuild integrations\tSend DMs, manage lists, and interact with Spaces\n\nAPI versions\n\nX API v2 (Recommended)\n\nX API v1.1 (Legacy)\n\nEnterprise\n\nThe current version of the X API with modern features and flexible pricing.Why use v2:\n\nPay-per-usage pricing\nModern JSON response format\nFlexible fields and expansions\nAdvanced features: annotations, conversation tracking, edit history\nAll new endpoints and features\n\nGetting started:\n\nSign up at console.x.com\nCreate an app and get credentials\nMake your first request\n\nThe previous version of the X API. Limited support; use v2 for new projects.Still available:\n\nSome media upload endpoints\nLegacy streaming (deprecated)\nSome specialized endpoints\n\nMigrating to v2: See the migration guide for endpoint mapping and data format changes.\n\nHigh-volume access for businesses with advanced needs.Features:\n\nComplete firehose access\nHistorical data backfill\nDedicated support\nCustom rate limits\nCompliance streams\n\nContact enterprise sales →\n\nAvailable resources\n\nThe X API provides access to these resource types:\n\nPosts\n\nSearch, retrieve, create, and delete posts. Access timelines, threads, and quote posts.\n\nUsers\n\nLook up profiles, manage relationships, and access follower data.\n\nSpaces\n\nDiscover live audio conversations and participants.\n\nDirect Messages\n\nSend and receive private messages between users.\n\nLists\n\nCreate and manage curated lists of accounts.\n\nTrends\n\nAccess trending topics by location.\n\nv2 highlights\n\nFields and expansions\n\nRequest only the data you need. Use fields parameters to select specific attributes and expansions to include related objects.\n\ncurl \"https://api.x.com/2/tweets/123?tweet.fields=created_at,public_metrics\u0026expansions=author_id\u0026user.fields=username\" \\\n  -H \"Authorization: Bearer $TOKEN\"\n\n\nLearn more about fields →\n\nPost annotations\n\nPosts include semantic annotations identifying people, places, products, and topics. Filter streams and searches by topic.Learn more about annotations →\n\nEngagement metrics\n\nAccess public metrics (likes, reposts, replies) and private metrics (impressions, clicks) for your own posts.Learn more about metrics →\n\nConversation tracking\n\nEdit history\n\nAccess the edit history of posts, including all previous versions and edit metadata.Learn more about edit posts →\n\nPricing\n\nX API v2 uses pay-per-usage pricing:\n\nBenefit\tDescription\nNo subscriptions\tPay only for what you use\nCredit-based\tPurchase credits, deducted per request\nReal-time tracking\tMonitor usage in the Developer Console\nDeduplication\tSame resource requested twice in 24 hours is only charged once\n\nPay-per-usage plans are subject to a monthly cap of 2 million Post reads. If you need higher volume, consider an Enterprise plan.\n\nView pricing details →\n\nNext steps",
    "link": "https://docs.x.com/x-api/getting-started/about-x-api",
    "snippet": "The current version of the X API with modern features and flexible pricing.Why use v2: Pay-per-usage pricing; Modern JSON response format; Flexible fields and ...",
    "title": "About the X API"
  },
  {
    "content_readable": "Enter your monthly volume to estimate spend and compare GetXAPI per-call pricing against official X API pay-per-use rates.\n\nTweet requests / month\n\nGetXAPI: $0.001 / call\n\nOfficial X: $0.005 / read, $0.010 / write\n\nUser requests / month\n\nGetXAPI: $0.001 / call\n\nOfficial X: $0.010 / read, $0.015 / write\n\nDM requests / month\n\nGetXAPI: $0.002 / call\n\nOfficial X: $0.010 / read, $0.015 / write\n\nEstimated Monthly Cost at Your Volume\n\nTotal input volume: 75,000 requests / month\n\nProvider\tEstimated Monthly Spend\tEffective Cost / 1,000\tPricing Model\nGetXAPI\t$85.00\t$1.13\tPay per call (no caps)\nOfficial X API\t$675.00\t$9.00\tPay per use\n\nYou save $590.00/mo with GetXAPI (87% less than official X API)\n\nSources: official X API pay-per-use pricing from developer.x.com/#pricing. Official X costs above use read rates for comparison; write operations cost more ($0.010–$0.015/request). Pricing last verified on February 9, 2026.",
    "link": "https://www.getxapi.com/twitter-api-cost-calculator",
    "snippet": "Twitter API Cost Calculator. Enter your monthly volume to estimate spend and compare GetXAPI per-call pricing against official X API pay-per-use rates.",
    "title": "Twitter API Cost Calculator - GetXAPI"
  },
  {
    "content_readable": "The X API pricing has dramatically changed since 2023 – free access is effectively gone. This complete guide covers authentication, rate limits, optimization strategies, and real-world use cases for building scalable X integrations with confidence.\n\n3 weeks ago\n\nThe X API (formerly Twitter API) has undergone dramatic changes since Elon Musk’s acquisition in 2023. What was once a free, developer-friendly platform is now a premium service with strict pricing tiers and carefully controlled access levels. For developers building bots, integrating real-time data, or creating social media management tools, understanding the current X API landscape is critical.\n\nThis comprehensive guide walks you through everything you need to know about obtaining X API credentials in 2026, understanding actual costs, and optimizing your implementation for efficiency.\n\nEssential concepts covered:\n\nHow X API pricing evolved from free to paid and the emerging pay-per-use model\nCurrent tiers breakdown and which tier fits your use case\nStep-by-step process to get your API credentials from the Developer Portal\nModern authentication methods and permission scopes\nFive proven optimization strategies to reduce costs and improve performance\n\nLet’s start by understanding where the X API fits into your development workflow and what’s currently available.\n\nThe X API Evolution: What Changed\n\nThe Twitter API has evolved dramatically over the years. Here’s the timeline of major changes:\n\nDate Event Impact on Developers\nOctober 2022 Elon Musk acquires Twitter Speculation about API changes begins\nFebruary 2023 Free API access eliminated Third-party clients (Tweetbot, Echofon) shut down; pricing becomes mandatory\nMarch 2023 Paid tiers introduced ($100, $2,500, $42,000) Entry price jumps 100x; developer ecosystem fragments\nJune 2024 Basic tier pricing doubles to $200/month Increased barrier to entry for indie developers\nOctober 2024 Official rebrand: Twitter → X All documentation and branding updated; confusing for legacy users\nNovember 2025 Pay-per-use pricing beta launches New consumption-based model with $500 developer vouchers for testing\n\nFree access became $200–$5,000/month in four years. Before planning an implementation, understand what the API actually provides and which tier matches your needs.\n\nWhat Can You Build With the X API?\n\nThe X API enables programmatic access to X’s infrastructure—from retrieving data to publishing content to automating responses. Here are the most common applications:\n\nBrand Monitoring \u0026 Social Intelligence\n\nTrack mentions, competitor activity, and trending conversations in real-time. Filtered streams deliver instant alerts when specific keywords or accounts generate activity, enabling teams to respond quickly to brand-relevant events.\n\nContent Scheduling\n\nAutomate posting schedules, manage multiple accounts from a single dashboard, and coordinate content workflows. Agencies and creators use these tools to handle dozens of X accounts without manual login-and-post cycles.\n\nWebsite Content Integration\n\nEmbed live X feeds, individual tweets, and trending topics directly into websites. Publishers keep content synchronized with live X activity without requiring manual updates or outdated embeds.\n\nData Analysis and Research\n\nAccess structured data for large-scale studies, trend analysis, and market research. The API provides historical search, engagement metrics, and user data at volumes that would be impossible to collect manually.\n\nAI \u0026 Sentiment Analysis\n\nFeed real-time X data into machine learning models, language models, and sentiment analysis systems. Applications range from audience monitoring to discourse analysis to predictive analytics.\n\nX API Pricing: The 2026 Tier System\n\nAs of today, X is testing a revolutionary pay-per-use pricing model, but the traditional tier system remains the active standard. Here’s what you need to know about both approaches.\n\n💲 Current Standard Pricing\n\nThe tiered pricing structure consists of three main tiers, each designed for different scales of usage:\n\nTier\tMonthly Cost\tAnnual Savings\tBest For\tKey Capabilities\nFree\t$0\t—\tDevelopment and testing only\t500 posts/month, read-heavy, 1 req per 24hrs on most endpoints, limited endpoint access\nBasic\t$200\t$2,100/year (12.5% savings)\tSmall projects, content monitoring, single app usage\t15,000 read requests/month, 50,000 write requests/month, standard endpoint access\nPro\t$5,000\t$54,000/year (10% savings)\tGrowing applications, full feature set, mission-critical systems\t1,000,000 read requests/month, 300,000 write requests/month, full endpoint access, priority support\nEnterprise\t$42,000+\tCustom pricing\tLarge-scale systems, dedicated infrastructure\tCustom rate limits, SLAs, dedicated support, advanced features, volumetric discounts\n\nWhile Basic is 25x cheaper ($200 vs $5,000), Pro gives you 100x more read capacity and unlocks critical features like full-archive search and real-time filtering. Most companies scale directly from Free → Basic → Pro.\n\n💢 What Changed: The Death of Free Access\n\nThe shift from free to paid access served two purposes: generating revenue from the platform’s data value, and reducing abuse. Free API access enabled spam bots, data scrapers, and malicious automation at scale.\n\nAvailable with Free Tier\n\n500 posts per calendar month (about 16-17 per day)\nRate-limited to 1 request per 24 hours on most endpoints\nNo posting, liking, or engaging – read-only access to public data only\nCannot write posts, create resources, or perform account actions\nNo access to trends, direct messaging, or advanced features\n\nReal-world impact: The Free tier is genuinely only for proof-of-concept work and local development testing. For any production application, you must budget for the Basic tier at minimum ($200/month).\n\n🔮 The New Pay-Per-Use Model (Beta)\n\nIn November 2025, X launched a closed beta for a revolutionary pricing approach: pay only for what you use. Instead of fixed monthly fees, developers in the beta pay individual prices for different API operations – similar to AWS or Google Cloud’s consumption-based billing.\n\nHow Pay-Per-Use Works\n\nThe beta pricing model assigns specific costs to each operation type. For example:\n\nReading a post costs a specific price (varies by operation)\nSearching posts costs more (higher computational load)\nCreating a post has its own rate\nAccessing trends uses a different pricing tier\nDirect messaging has separate pricing\n\nImportant Note: The pay-per-use model is in closed beta as of December 2025. Plan your implementation based on current tier pricing, but monitor the official X Developer Twitter (@XDevelopers) for announcements about broader rollout.\n\nAll developers in the closed beta receive a $500 voucher to experiment before committing to production usage.\n\nPotential Benefits Over Fixed Tiers\n\nNo payment for unused capacity (unlike fixed tier pricing)\nAbility to scale up or down without tier changes\nGranular control over spending per feature\nMore transparent cost attribution\n\nX provides an interactive API cost calculator where you can input your expected usage patterns and see exactly what you’d pay.\n\nX Authentication: How to Prove Your Identity\n\nBefore making any API request, you need to authenticate – prove to X that you’re authorized to access specific data. The X API v2 supports multiple authentication methods, each suited for different scenarios.\n\n🔐 OAuth 2.0 Authorization Code (Recommended for New Development)\n\nOAuth 2.0 is the modern standard for authentication and is recommended for all new development. It’s more secure than legacy approaches and handles both public and private user data.\n\nWhen to Use OAuth 2.0\n\nBuilding new applications from scratch\nWeb applications and mobile apps requiring user login\nAccessing private user data (private lists, draft posts)\nPerforming actions on behalf of users (posting, liking, following)\n\nHow It Works\n\nUser clicks “Sign in with X” in your application\nYour app redirects them to X’s authorization page\nUser grants permissions (you define the scopes requested)\nX returns an authorization code\nYour app exchanges the code for an access token\nYou use this token for API requests on behalf of the user\n\nRequired credentials: Client ID, Client Secret, and redirect URI (configured in your developer app settings).\n\n🔑 OAuth 1.0a User Context (Legacy, Still Supported)\n\nThis older method is still supported but not recommended for new development. OAuth 1.0a authenticates on behalf of a specific user and is primarily useful for legacy applications.\n\nPosted tweets or direct messages on a user’s behalf\nRetrieving a specific user’s private timeline\nManaging user-specific resources\n\nWhy it’s less preferred: More complex to implement, less secure than OAuth 2.0, and X is gradually moving developers toward OAuth 2.0.\n\n👥 Bearer Token (App-Only, Best for Public Data)\n\nBearer token authentication is the simplest approach for accessing public data without user context. Use this when you’re building tools that only need public information.\n\nWhen to Use\n\nSearching for public posts\nRetrieving public user profiles\nAccessing publicly available trends\nBuilding analytics tools for public content\n\nHow it works: Provide your app’s credentials (API Key and Secret), receive a Bearer Token, include the token in API request headers. No user involvement required.\n\nSecurity Best Practice: Store all credentials (API Keys, Secrets, Bearer Tokens) in environment variables or secure configuration files – never hardcode them into your application code. If credentials are exposed, regenerate them immediately in the developer portal.\n\nX API v2: Endpoints and Resource Types\n\nThe X API comes in two versions: v1.1 (legacy, no longer updated) and v2 (current standard). All new projects should use v2, which provides access to endpoints organized by resource type – Posts, Users, Trends, Engagement, and more. Each resource supports specific operations (read, create, update, delete) depending on your tier and permissions.\n\nPosts (Tweets) – The Core Resource\n\nWhat you can do: Retrieve posts, search for posts matching criteria, create new posts, delete posts, access timelines\n\nCommon endpoints:\n\nGET /2/tweets — Lookup specific posts by ID\nGET /2/tweets/search/recent — Search recent posts (last 7 days)\nPOST /2/tweets — Create a new post\nGET /2/users/:id/tweets — Get posts from a specific user\n\nPosts are the foundation of the X API. Almost every use case involves retrieving, searching, or creating posts in some way.\n\nUsers – Profile Information\n\nWhat you can do: Access user profiles, get follower information, search for users\n\nCommon endpoints:\n\nGET /2/users/by/username/:username — Get user by handle\nGET /2/users/:id — Get user by ID\nGET /2/users/:id/followers — Get user’s followers\n\nUser endpoints let you build profiles, track followers, and verify account information without manually visiting X.\n\nEngagement – Likes, Retweets, Replies\n\nWhat you can do: See engagement metrics, track who liked or retweeted posts, manage user engagement\n\nCommon endpoints:\n\nGET /2/tweets/:id/liked_by — See who liked a post\nPOST /2/users/:id/likes — Like a post\nGET /2/tweets/:id/quote_tweets — Get quote tweets (retweets with added commentary)\n\nEngagement endpoints power analytics dashboards and community management tools by tracking interactions and responses to content.\n\nLists – User Collections\n\nWhat you can do: Create and manage curated lists of users, access posts from list members\n\nCommon endpoints:\n\nGET /2/lists — List your lists\nPOST /2/lists/:id/members — Add member to list\nGET /2/lists/:id/tweets — Get posts from list members\n\nLists are useful for organizing accounts and creating targeted feeds without following everyone publicly.\n\nTrends – What’s Happening Now\n\nWhat you can do: Access real-time trending topics and hashtags\n\nCommon endpoints:\n\nGET /2/trends — Get trending topics\nGET /2/users/personalized_trends — Get personalized trending topics for a user\n\nTrends data powers discovery features and helps applications surface relevant conversations happening right now on X.\n\nFiltered Stream – Real-Time Data\n\nWhat you can do: Subscribe to a real-time stream of posts matching your rules, receive notifications as posts are created\n\nCommon endpoints:\n\nGET /2/tweets/search/stream — Connect to filtered stream\nPOST /2/tweets/search/stream/rules — Create or modify stream rules\n\nFiltered stream is powerful for applications that need real-time updates (monitoring brand mentions, tracking specific keywords, etc.) without constantly polling the search endpoint.\n\nDirect Messages – Private Communication\n\nWhat you can do: Send and receive direct messages, manage conversations\n\nCommon endpoints:\n\nGET /2/dm_events — Retrieve direct messages\nPOST /2/dm_conversations/:id/messages — Send a message\n\nDirect message endpoints enable customer support automation and notification systems built on top of X.\n\nNote: Not all endpoints are available on all tiers. Free tier access is heavily restricted. The Basic tier ($200/month) provides access to most commonly used endpoints. Check the official X API documentation to verify endpoint availability for your tier before building features.\n\nRate Limits and Quota Management\n\nThe X API v2 enforces two types of limits: request rate limits (per 15-minute windows) and monthly post consumption limits (tracked across the calendar month).\n\n📨 Request Rate Limits (Per 15-Minute Windows)\n\nDifferent endpoints have different rate limits based on your tier.\n\nEndpoint Example\tFree Tier\tBasic Tier\tPro Tier\nGET /2/users/:id (lookup user)\t1 req / 24 hours\t100 requests / 24 hours\t900 requests / 15 mins\nPOST /2/tweets (create post)\tNot available\tAvailable\tAvailable\nGET /2/tweets/search/recent\tLimited\tAvailable\t450 requests / 15 mins\n\nFree tier uses per-endpoint limits measured in 24-hour windows (very restrictive). Basic and Pro tiers use 15-minute windows, which are much more generous because the window resets frequently.\n\n📊 Monthly Post Consumption Limits\n\nSeparate from request rate limits, search and stream endpoints consume from a monthly “post quota.” Once consumed, you can’t query these endpoints until the next calendar month.\n\nFree tier: 10,000 posts/month\nBasic tier: 500,000 posts/month\nPro tier: 2,000,000+ posts/month\n\nThese limits apply specifically to: recent search, filtered stream, user timelines, and mention timelines.\n\n🚨 What Happens When You Hit a Limit\n\nWhen you exceed a rate limit, X returns an HTTP 429 (Too Many Requests) error response with a Retry-After header indicating how many seconds to wait before retrying.\n\nWhen you exhaust your monthly post quota, X returns a 429 error indicating the quota limit is reached. You’re blocked from querying that endpoint until the next calendar month begins.\n\nBest Practice: Implement exponential backoff and retry logic in your application. When you receive a 429 error, wait the duration specified in Retry-After before retrying. For monthly quota exhaustion, cache your search results aggressively to avoid querying the same data repeatedly.\n\nFive Optimization Strategies: Reduce Costs and Improve Performance\n\nWith limited rate limits and monthly quotas, optimization directly impacts your application’s capability and cost. Here are proven strategies to reduce API consumption.\n\n1. Use Field Selection to Reduce Response Size\n\nBy default, API responses return many fields you might not need. The fields parameter lets you request only specific data.\n\nInstead of:\n\nGET /2/tweets?ids=TWEET_ID\n\nUse:\n\nGET /2/tweets?ids=TWEET_ID\u0026tweet.fields=created_at,public_metrics\u0026expansions=author_id\u0026user.fields=username\n\nThe second request returns only the data you need, resulting in smaller responses and faster processing.\n\n2. Implement Application-Level Caching\n\nCache API responses in your database or cache layer with appropriate TTL values:\n\nStatic content (usernames, display names): 24 hours\nSemi-dynamic content (post text, engagement counts): 6 hours\nReal-time content (trending topics): 30 minutes to 1 hour\n\nReal impact: A dashboard that previously fetched trending posts every 15 minutes can drop to every 2 hours with caching, reducing daily API calls from 96 to 12—an 87.5% reduction.\n\n3. Batch Requests Whenever Possible\n\nSome endpoints accept multiple IDs in a single request.\n\nInstead of 3 separate requests:\n\nGET /2/tweets?ids=ID1 GET /2/tweets?ids=ID2 GET /2/tweets?ids=ID3\n\nUse 1 batch request:\n\nGET /2/tweets?ids=ID1,ID2,ID3\n\nThis reduces your consumption from 3 requests to 1, saving 67% of your quota.\n\n4. Use Backoff and Retry Logic\n\nWhen hitting rate limits or temporary errors, retry with exponential backoff:\n\nWait 1 second before retry 1\nWait 2 seconds before retry 2\nWait 4 seconds before retry 3\nWait 8 seconds before retry 4\n\nThis prevents hammering the API and gives temporary issues time to resolve.\n\n5. Consider Filtered Stream Instead of Polling\n\nInstead of repeatedly asking “Are there new posts matching my criteria?” (polling), subscribe to webhooks where X pushes notifications when matching posts appear.\n\nPolling approach: Check every 5 minutes = 288 checks/day. Most checks return “no new data” (wasted quota).\n\nFiltered stream approach: Receive notification only when data changes. Zero wasted requests. Real-time updates.\n\nCombined Impact: Applying all five optimization strategies together can reduce your API consumption 70-90% compared to unoptimized code. A dashboard consuming 5,000 units daily can drop to 500-1,500 units through optimization alone, without requesting a quota increase.\n\nError Handling: Common Issues and Solutions\n\nUnderstanding common error codes helps you debug and recover gracefully.\n\nError Code\tHTTP Status\tCause\tSolution\nInvalid Request\t400\tMalformed request or missing required fields\tReview request format, ensure all required parameters present\nUnauthorized\t401\tMissing or invalid credentials\tCheck that Bearer Token or OAuth tokens are correct and not expired\nForbidden\t403\tAuthenticated but not authorized (insufficient permissions)\tRequest additional scopes in your OAuth flow, get user re-approval\nNot Found\t404\tResource doesn’t exist (invalid ID, deleted content)\tVerify resource ID is correct and still exists\nRate Limited\t429\tToo many requests within the time window\tImplement backoff, wait for rate limit window to reset (check Retry-After header)\nQuota Exceeded\t429\tMonthly post quota exhausted\tWait until next calendar month, or request quota increase\n\n🔧 Parsing Error Responses\n\nWhen an error occurs, X returns JSON with details:\n\n{ \"errors\": [ { \"message\": \"The `ids` query parameter value is invalid\", \"type\": \"https://api.x.com/2/problems/invalid-request\" } ] }\n\nBest practice: Always wrap API calls in try-catch blocks and log errors to a monitoring system. This helps you identify patterns and debug issues faster.\n\nGet Your X API Key: Step-by-Step\n\nThe process has simplified significantly compared to the old Twitter API, but there are still critical steps:\n\n🔗 Step 1: Create a Developer Account\n\nNavigate to X Developer Portal\nSign in with your X account (or create one)\nComplete developer profile setup\nAwait approval (typically 5-10 minutes)\n\nFirst-time users will see an onboarding wizard that guides you through creating your first Project and App. If you don’t see this, click “Projects \u0026 Apps” in the left sidebar.\n\n📂 Step 2: Create a Project\n\nA Project is a container for one or more Apps. Think of it as a workspace.\n\nIn the Developer Portal, click “Create Project”\nName your project (e.g., “Analytics Dashboard”)\nDescribe your use case\nSelect your access tier (start with Free for testing)\n\nBy default, you’re on the Free tier. To upgrade: Go to the “Products” section in the developer portal → Find the X API v2 card and click “View Access Levels” → Select the tier you want\n\n🔨 Step 3: Create an App\n\nWithin your project, click “Create App”\nChoose an App name (e.g., “Brand Monitor Bot”)\nAccept terms\nGenerate your API keys\n\n🔑 Step 4: Access Your Credentials\n\nNavigate to your app’s “Keys and Tokens” tab. You’ll find:\n\nAPI Key (Consumer Key): A public identifier for your app. Safe to share in source code.\nAPI Secret Key (Consumer Secret): Keep this secure! Never expose it in client-side code or version control.\nBearer Token (for app-only auth): Used for app-only authentication (read-only, no user context needed). Also keep secure.\nClient ID \u0026 Secret (for OAuth 2.0): OAuth 2.0 credentials. Only visible if you enable OAuth 2.0 in your app settings.\n\nCritical Security Warning: These credentials display only once. Copy them immediately to a secure location (password manager, encrypted file, environment variables). Never commit to version control or publish publicly. If exposed, regenerate immediately.\n\nRecommended Tools \u0026 Resources\n\nOfficial X API Documentation: The authoritative source for all endpoints, parameters, and examples.\nRate Limits Reference: Complete breakdown of all endpoint rate limits by tier.\nX Postman Collection: Pre-built API requests for testing in Postman. Eliminates manual endpoint crafting.\nX Developer Community Forum: Connect with other developers, ask questions, report issues.\nX Dev GitHub: Official sample code, SDKs, and libraries for Python, JavaScript, Java, and more.\nClient Libraries: Official and community-maintained SDKs in multiple languages. Saves time vs. raw HTTP requests.\n\nFAQ: Common Questions About the X API\n\nThe Free tier is available but extremely limited (500 posts/month, 1 request per 24 hours on most endpoints). It’s suitable only for development and proof-of-concept work. For production applications, the Basic tier ($200/month) is the practical minimum.\n\nOAuth 2.0 authenticates on behalf of a specific user and grants permission scopes. Bearer token (app-only) authenticates as your application to access public data. Use OAuth 2.0 when users need to login and grant permissions; use Bearer tokens for public data without user involvement.\n\nOAuth tokens don’t expire automatically—they remain valid until explicitly revoked or regenerated. Best practice: regenerate tokens every 90 days for security. If you suspect a token is compromised, regenerate immediately.\n\nYou receive an HTTP 429 response with a Retry-After header. Implement exponential backoff and retry after the specified duration. Your request is rejected, so no quota is consumed for failed attempts.\n\nYes. Submit a quota increase request through the Google Cloud Console. Provide your use case, user count, and realistic usage estimates. Google reviews and approves/denies based on compliance and legitimacy. Quota increases are free.\n\nFree tier: development and testing only. Basic ($200/month): most real-world projects (content monitoring, automation, small applications). Pro ($5,000/month): high-traffic applications, APIs serving many end users. Enterprise ($42k+): mission-critical systems requiring SLAs and dedicated support.\n\nNeed more help? Check the X Developer Documentation or visit the X Developer Community Forum to connect with other developers and get answers from the community.\n\nNext Steps\n\nBuilding with the X API is straightforward once you understand the pricing, rate limits, and optimization strategies. Whether you’re monitoring brand conversations, automating content, or analyzing trends, the API provides everything you need. Start with a small project, implement the five optimization strategies early, and grow from there.\n\nThe difference between a scalable application and one that struggles often comes down to implementation details. Plan thoroughly, optimize aggressively from day one, and your X integration will thrive. Ready to get started? Head to developer.x.com, create your first project, and begin building!\n\nSupport\n\nIf you have read the instructions but still have any questions, you can always contact our support specialists or read articles in the Help Center.\n\nAsk for help\n\nForum\n\nContact Elfsight peers, share your thoughts, and participate in community activities!\n\nJoin us\n\nWishlist\n\nVisit Wishlist to offer features that you need but the Form Builder doesn’t have yet.\n\nShare Your Idea\n\nHi, I’m Kristina – content manager at Elfsight. My articles cover practical insights and how-to guides on smart widgets that tackle real website challenges, helping you build a stronger online presence.",
    "link": "https://elfsight.com/blog/how-to-get-x-twitter-api-key-in-2026/",
    "snippet": "X provides an interactive API cost calculator where you can input your expected usage patterns and see exactly what you'd pay. X ...",
    "title": "How to Get X API Key: Complete 2026 Guide to Pricing ... - Elfsight"
  },
  {
    "content_readable": "Crawler is not allowed!",
    "link": "https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476",
    "snippet": "We're thrilled to officially announce the launch of our new X API Pay-Per-Use pricing model! This update is designed to empower the heart of our ...",
    "title": "Announcing the Launch of X API Pay-Per-Use Pricing"
  },
  {
    "content_readable": "Your premier source for technology news, insights, and analysis. Covering the latest in AI, startups, cybersecurity, and innovation.\n\nHAVE A TIP?\n\nSend us a tip using our anonymous form.\n\nHAVE QUESTIONS?\n\nReach out to us on any subject.\n\n© 2026 The Tech Buzz. All rights reserved.",
    "link": "https://www.techbuzz.ai/articles/x-tests-pay-per-use-api-model-to-win-back-developers",
    "snippet": "X's new API calculator lets developers estimate costs upfront, a transparency move that stands in stark contrast to the all-or-nothing tiers ...",
    "title": "X Tests Pay-Per-Use API Model to Win Back Developers"
  },
  {
    "content_readable": "Price Per Token\n\nIs ChatGPT Plus or Claude Pro worth it? Input your usage and instantly see whether a subscription or pay-per-token API access is cheaper for you.\n\nMessages per Day\n\nAPI Model\n\nMessage Length\n\nLonger prompt, thorough response\n\n800 input + 1500 output tokens/msg\n\nAt your usage, the API saves you $9.80/month\n\nYour estimated API cost is $10.20/mo compared to ChatGPT Plus at $20.00/mo. The API gives you full flexibility with no rate limits.\n\nNote: Subscriptions include features not available via API (web browsing, file uploads, custom GPTs, etc.)\n\nAPI\n\nBest Value\n\n$10.20/mo\n\nGPT-4o via API\n\nPay per token used\n\nNo rate limits\n\nFull API access\n\nChatGPT Free\n\nFree\n\nSaves $10.20/mo vs API\n\nMay exceed rate limits\n\nLimited GPT-4o access\n\nGPT-4o mini\nLimited GPT-4o\nWeb browsing\nLimited file uploads\n\nChatGPT Plus\n\n$20/mo\n\n$9.80/mo more than API\n\nGPT-4o\nGPT-4o mini\no1\no3-mini\n\nChatGPT Pro\n\n$200/mo\n\n$189.80/mo more than API\n\nUnlimited GPT-4o\nUnlimited o1\no1 pro mode\nUnlimited Advanced Data Analysis\n\nAt your usage, Claude Free saves you $14.94/month\n\nClaude Free costs $0.0000/mo vs $14.94/mo for API.\n\nNote: Subscriptions include features not available via API (web browsing, file uploads, custom GPTs, etc.)\n\nAPI\n\n$14.94/mo\n\nClaude Sonnet 4.5 via API\n\nPay per token used\n\nNo rate limits\n\nFull API access\n\nClaude Free\n\nBest Value\n\nFree\n\nSaves $14.94/mo vs API\n\nClaude Sonnet 4.5\nBasic web search\nLimited file uploads\n\nClaude Pro\n\n$20/mo\n\n$5.06/mo more than API\n\nClaude Sonnet 4.5\nClaude Opus 4\nExtended thinking\nProjects\n\nClaude Max 5x\n\n$100/mo\n\n$85.06/mo more than API\n\nEverything in Pro\n5x Pro usage limits\nHigher rate limits on all models\n\nClaude Max 20x\n\n$200/mo\n\n$185.06/mo more than API\n\nEverything in Pro\n20x Pro usage limits\nHighest rate limits\n\nFrequently Asked Questions\n\nCommon questions about subscription vs API pricing\n\nFollow us:\n\n2026 68 Ventures, LLC. All rights reserved.",
    "link": "https://pricepertoken.com/subscription-calculator",
    "snippet": "Your estimated API cost is $10.20/mo compared to ChatGPT Plus at $20.00/mo. The API gives you full flexibility with no rate limits. Note: Subscriptions include ...",
    "title": "Subscription vs API Cost Calculator - ChatGPT Plus \u0026 Claude Pro vs ..."
  },
  {
    "content_readable": "The Twitter API pricing saga has been a wild ride of extremes, and it looks like we might finally be heading toward some middle ground. According to recent announcements, Twitter (now X) is testing a pay-per-usage model that could dramatically reshape how developers and data scrapers interact with the platform.\n\nThe Pendulum Swings Back\n\nTwitter's API pricing history reads like a case study in how not to manage developer relations. The platform started with a completely free API that, while generous, created massive problems with abuse, scraping, and system strain. When Elon Musk took over, the pendulum swung hard in the opposite direction – suddenly, API access became prohibitively expensive for most developers and small businesses.\n\nThe result? A thriving underground economy of scrapers and unofficial API alternatives, along with frustrated developers who were priced out of legitimate access to Twitter data.\n\nPay-Per-Use: The Obvious Solution\n\nThe announcement hints at what many in the developer community have been calling for: a reasonable, pay-as-you-go pricing model. This approach makes intuitive sense for several reasons:\n\nScalability for Everyone: Small developers and researchers can access the API without massive upfront commitments, while larger enterprises pay proportionally for their usage.\nBetter Cost Control: Instead of paying for unused quota or being locked into expensive tiers, users pay only for what they actually consume.\nReduced Scraping Incentive: If official API access becomes affordable, the economic motivation to build and maintain scraping infrastructure diminishes significantly.\n\nThe Scraper's Dilemma\n\nFor those currently running Twitter scraping operations, this development presents an interesting calculation. Scraping Twitter has always been a cat-and-mouse game.\n\nYou're constantly dealing with rate limits, IP blocks, CAPTCHA systems, and constantly changing HTML structures. It's expensive to maintain and inherently unreliable.\n\nIf Twitter prices their pay-per-use API competitively, many scrapers might find it cheaper and more reliable to simply pay for official access. The question becomes: what constitutes \"competitively priced\"?\n\nWhat \"Reasonable\" Might Look Like\n\nFor a pay-per-use model to truly disrupt the scraping economy, it needs to be:\n\nTransparent: Clear pricing with no hidden fees or surprise charges\nGranular: Pay for exactly what you use, whether that's 100 requests or 100,000\nCompetitive: Priced low enough that it's cheaper than building and maintaining scraping infrastructure\nReliable: Stable pricing and terms that developers can build long-term plans around\n\nThe Bigger Picture\n\nThis shift could signal a broader maturation in how social media platforms think about data access. The all-or-nothing approaches of the past – either completely free or prohibitively expensive – haven't served anyone well.\n\nA well-implemented pay-per-use model could:\n\nReduce the technical arms race between platforms and scrapers\nEnable more legitimate research and business applications\nProvide platforms with sustainable revenue from data access\nCreate a healthier ecosystem for developers\n\nImpact on the Scraping Ecosystem\n\nIf Twitter gets this right, it could set a precedent for other social media platforms. The current ecosystem of scraping tools and services exists largely because official APIs are either unavailable, unreliable, or unaffordably priced.\n\nA shift toward reasonable pay-per-use pricing across major platforms could fundamentally change this landscape, potentially making legitimate API access the norm rather than the exception.\n\nLooking Forward\n\nThe scraping community is watching this development closely. Many scraper operators would probably prefer the predictability and reliability of official API access – if the price is right.\n\nFor now, it's a waiting game. The pilot program will provide the first real indication of whether Twitter has learned from their pricing missteps or if we're headed for another swing of the pendulum.",
    "link": "https://scrapecreators.com/blog/twitter-s-pay-per-use-api-could-this-finally-kill-the-scraping-economy",
    "snippet": "Better Cost Control: Instead of paying for unused quota or being locked into expensive tiers, users pay only for what they actually consume.",
    "title": "Twitter's Pay-Per-Use API: Could This Finally Kill the Scraping ..."
  },
  {
    "content_readable": "This is part one of the Advanced Use Cases series:\n\n1️⃣ Extract Metadata from Queries to Improve Retrieval\n\n2️⃣ Query Expansion\n\n3️⃣ Query Decomposition\n\n4️⃣ Automated Metadata Enrichment\n\nSometimes a single question is multiple questions in disguise. For example: “Did Microsoft or Google make more money last year?”. To get to the correct answer for this seemingly simple question, we actually have to break it down: “How much money did Google make last year?” and “How much money did Microsoft make last year?”. Only if we know the answer to these 2 questions can we reason about the final answer.\n\nThis is where query decomposition comes in. This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach:\n\nDecompose the original question into smaller questions that can be answered independently to each other. Let’s call these ‘sub questions’ here on out.\nReason about the final answer to the original question, based on each sub-answer.\n\nWhile for many query/dataset combinations, this may not be required, for some, it very well may be. At the end of the day, often one query results in one retrieval step. If within that one single retrieval step we are unable to have the retriever return both the money Microsoft made last year and Google, then the system will struggle to produce an accurate final response.\n\nThis method ensures that we are:\n\nretrieving the relevant context for each sub question.\nreasoning about the final answer given each answer based on the contexts retrieved for each sub question.\n\nIn this article, I’ll be going through some key steps that allow you to achieve this. You can find the full working example and code in the linked recipe from our cookbook. Here, I’ll only show the most relevant parts of the code.\n\n🚀 I’m sneaking something extra into this article. I saw the opportunity to try out the structured output functionality (currently in beta) by OpenAI to create this example. For this step, I extended the OpenAIGenerator in Haystack to be able to work with Pydantic schemas. More on this in the next step.\n\nLet’s try build a full pipeline that makes use of query decomposition and reasoning. We’ll use a dataset about Game of Thrones (a classic for Haystack) which you can find preprocessed and chunked on Tuana/game-of-thrones on Hugging Face Datasets.\n\nDefining our Questions Structure\n\nOur first step is to create a structure within which we can contain the subquestions, and each of their answers. This will be used by our OpenAIGenerator to produce a structured output.\n\nfrom pydantic import BaseModel\n\nclass Question(BaseModel):\n    question: str\n    answer: Optional[str] = None\n\nclass Questions(BaseModel):\n    questions: list[Question]\n\n\nThe structure is simple, we have Questions made up of a list of Question. Each Question has the question string as well as an optional answer to that question.\n\nDefining the Prompt for Query Decomposition\n\nNext up, we need to get an LLM to decompose a question and produce multiple questions. Here, we will start making use of our Questions schema.\n\nsplitter_prompt = \"\"\"\nYou are a helpful assistant that prepares queries that will be sent to a search component.\nSometimes, these queries are very complex.\nYour job is to simplify complex queries into multiple queries that can be answered\nin isolation to eachother.\n\nIf the query is simple, then keep it as it is.\nExamples\n1. Query: Did Microsoft or Google make more money last year?\n   Decomposed Questions: [Question(question='How much profit did Microsoft make last year?', answer=None), Question(question='How much profit did Google make last year?', answer=None)]\n2. Query: What is the capital of France?\n   Decomposed Questions: [Question(question='What is the capital of France?', answer=None)]\n3. Query: {{question}}\n   Decomposed Questions:\n\"\"\"\n\nbuilder = PromptBuilder(splitter_prompt)\nllm = OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions})\n\n\nAnswering Each Sub Question\n\nFirst, let’s build a pipeline that uses the splitter_prompt to decompose our question:\n\nquery_decomposition_pipeline = Pipeline()\n\nquery_decomposition_pipeline.add_component(\"prompt\", PromptBuilder(splitter_prompt))\nquery_decomposition_pipeline.add_component(\"llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\n\nquery_decomposition_pipeline.connect(\"prompt\", \"llm\")\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question}})\nprint(result[\"llm\"][\"structured_reply\"])\n\n\nThis produces the following Questions (List[Question])\n\nquestions=[Question(question='How many siblings does Jamie have?', answer=None), \n           Question(question='How many siblings does Sansa have?', answer=None)]\n\n\nNow, we have to fill in the answer fields. For this step, we need to have a separate prompt and two custom components:\n\nThe CohereMultiTextEmbedder which can take multiple questions rather than a single one like the CohereTextEmbedder.\nThe MultiQueryInMemoryEmbeddingRetriever which can again, take multiple questions and their embeddings, returning question_context_pairs. Each pair contains the question and documents that are relevant to that question.\n\nNext, we need to construct a prompt that can instruct a model to answer each subquestion:\n\nmulti_query_template = \"\"\"\nYou are a helpful assistant that can answer complex queries.\nHere is the original question you were asked: {{question}}\n\nAnd you have split the task into the following questions:\n{% for pair in question_context_pairs %}\n  {{pair.question}}\n{% endfor %}\n\nHere are the question and context pairs for each question.\nFor each question, generate the question answer pair as a structured output\n{% for pair in question_context_pairs %}\n  Question: {{pair.question}}\n  Context: {{pair.documents}}\n{% endfor %}\nAnswers:\n\"\"\"\n\nmulti_query_prompt = PromptBuilder(multi_query_template)\n\n\nLet’s build a pipeline that can answer each individual sub question. We will call this the query_decomposition_pipeline :\n\nquery_decomposition_pipeline = Pipeline()\n\nquery_decomposition_pipeline.add_component(\"prompt\", PromptBuilder(splitter_prompt))\nquery_decomposition_pipeline.add_component(\"llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\nquery_decomposition_pipeline.add_component(\"embedder\", CohereMultiTextEmbedder(model=\"embed-multilingual-v3.0\"))\nquery_decomposition_pipeline.add_component(\"multi_query_retriever\", MultiQueryInMemoryEmbeddingRetriever(InMemoryEmbeddingRetriever(document_store=document_store)))\nquery_decomposition_pipeline.add_component(\"multi_query_prompt\", PromptBuilder(multi_query_template))\nquery_decomposition_pipeline.add_component(\"query_resolver_llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\n\nquery_decomposition_pipeline.connect(\"prompt\", \"llm\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"embedder.questions\")\nquery_decomposition_pipeline.connect(\"embedder.embeddings\", \"multi_query_retriever.query_embeddings\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"multi_query_retriever.queries\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"embedder.questions\")\nquery_decomposition_pipeline.connect(\"multi_query_retriever.question_context_pairs\", \"multi_query_prompt.question_context_pairs\")\nquery_decomposition_pipeline.connect(\"multi_query_prompt\", \"query_resolver_llm\")\n\n\nRunning this pipeline with the original question “Who has more siblings, Jamie or Sansa?”, results in the following structured output:\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question},\n                                           \"multi_query_prompt\": {\"question\": question}})\n\nprint(result[\"query_resolver_llm\"][\"structured_reply\"])\n\n\nquestions=[Question(question='How many siblings does Jamie have?', answer='2 (Cersei Lannister, Tyrion Lannister)'),\n           Question(question='How many siblings does Sansa have?', answer='5 (Robb Stark, Arya Stark, Bran Stark, Rickon Stark, Jon Snow)')]\n\n\nReasoning About the Final Answer\n\nThe final step we have to take is to reason about the ultimate answer to the original question. Again, we create a prompt that will instruct an LLM to do this. Given we have the questions output that contains each sub question and answer, we will make these inputs to this final prompt.\n\nreasoning_template = \"\"\"\nYou are a helpful assistant that can answer complex queries.\nHere is the original question you were asked: {{question}}\n\nYou have split this question up into simpler questions that can be answered in\nisolation.\nHere are the questions and answers that you've generated\n{% for pair in question_answer_pair %}\n  {{pair}}\n{% endfor %}\n\nReason about the final answer to the original query based on these questions and\naswers\nFinal Answer:\n\"\"\"\n\nresoning_prompt = PromptBuilder(reasoning_template)\n\n\nTo be able to augment this prompt with the question answer pairs, we will have to extend our previous pipeline and connect the structured_reply from the previous LLM, to the question_answer_pair input of this prompt.\n\nquery_decomposition_pipeline.add_component(\"reasoning_prompt\", PromptBuilder(reasoning_template))\nquery_decomposition_pipeline.add_component(\"reasoning_llm\", OpenAIGenerator(model=\"gpt-4o-mini\"))\n\nquery_decomposition_pipeline.connect(\"query_resolver_llm.structured_reply\", \"reasoning_prompt.question_answer_pair\")\nquery_decomposition_pipeline.connect(\"reasoning_prompt\", \"reasoning_llm\")\n\n\nNow, let’s run this final pipeline and see what results we get:\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question},\n                                           \"multi_query_prompt\": {\"question\": question},\n                                           \"reasoning_prompt\": {\"question\": question}},\n                                           include_outputs_from=[\"query_resolver_llm\"])\n\nprint(\"The original query was split and resolved:\\n\")\n\nfor pair in result[\"query_resolver_llm\"][\"structured_reply\"].questions:\n  print(pair)\nprint(\"\\nSo the original query is answered as follows:\\n\")\nprint(result[\"reasoning_llm\"][\"replies\"][0])\n\n\n🥁 Drum roll please:\n\nThe original query was split and resolved:\n\nquestion='How many siblings does Jaime have?' answer='Jaime has one sister (Cersei) and one younger brother (Tyrion), making a total of 2 siblings.'\nquestion='How many siblings does Sansa have?' answer='Sansa has five siblings: one older brother (Robb), one younger sister (Arya), and two younger brothers (Bran and Rickon), as well as one older illegitimate half-brother (Jon Snow).'\n\nSo the original query is answered as follows:\n\nTo determine who has more siblings between Jaime and Sansa, we need to compare the number of siblings each has based on the provided answers.\n\nFrom the answers:\n- Jaime has 2 siblings (Cersei and Tyrion).\n- Sansa has 5 siblings (Robb, Arya, Bran, Rickon, and Jon Snow).\n\nSince Sansa has 5 siblings and Jaime has 2 siblings, we can conclude that Sansa has more siblings than Jaime.\n\nFinal Answer: Sansa has more siblings than Jaime.\n\n\nWrapping up\n\nGiven the right instructions, LLMs are good at breaking down tasks. Query decomposition is a great way we can make sure we do that for questions that are multiple questions in disguise.\n\nIn this article, you learned how to implement this technique with a twist 🙂 Let us know what you think about using structured outputs for these sorts of use cases. And check out the Haystack experimental repo to see what new features we’re working on.",
    "link": "https://haystack.deepset.ai/blog/query-decomposition",
    "snippet": "This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach.",
    "title": "Advanced RAG: Query Decomposition \u0026 Reasoning - Haystack"
  },
  {
    "content_readable": "This page provides a list of attributes and keywords that are used in YouTrack query language. You'll also find a complete list of operators, symbols, and relative date parameters that are recognized in search queries.\n\nSeveral references on this page are not available in Simple Search. Switch to Advanced Search to access them.\n\nIssue Attributes\n\nEvery issue has base attributes that are set automatically by YouTrack. These include the issue ID, the user who created or applied the last update to the issue, and so on.\n\nThese search attributes represent an \u003cAttribute\u003e in the Search Query Grammar. Their values correspond to the \u003cValue\u003e or \u003cValueRange\u003e parameter.\n\nAttribute-based search uses the syntax attribute: value.\n\nYou can specify multiple values for the target attribute, separated by commas.\n\nExclude specific values from the search results with the syntax attribute: -value.\n\nIn many cases, you can omit the attribute and reference values directly with the # or - symbols. For additional guidelines, see Advanced Search.\n\nattachment text\n\nattachment text: \u003ctext\u003e\n\nReturns issues that include image attachments with the specified text.\n\nattachments\n\nattachments: \u003ctext\u003e\n\nReturns issues that include attachments with the specified filename.\n\nBoard\n\nBoard \u003cboard name\u003e: \u003csprint name\u003e\n\nReturns issues that are assigned to the specified sprint on the specified agile board. To find issues that are assigned to agile boards with sprints disabled, use has: \u003cboard name\u003e.\n\ncc recipients\n\ncc recipients: \u003cuser\u003e\n\nReturns tickets where the specified users are added as CCs.\n\ncode\n\ncode: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words that are formatted as code in the issue description or comments. This includes matches that are formatted as inline code spans, indented and fenced code blocks, and stack traces.\n\ncommented: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues to which comments were added on the specified date or within the specified period.\n\ncommenter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were commented by the specified user or by a member of the specified group.\n\ncomments: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in a comment.\n\ncreated\n\ncreated: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that were created on a specific date or within a specified time frame.\n\ndescription\n\ndescription: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in the issue description.\n\ndocument type\n\ndocument type: Issue | Ticket\n\nReturns either issue or ticket type documents.\n\nGantt\n\nGantt: \u003cchart name\u003e\n\nReturns issues that are assigned to the specified Gantt chart.\n\nhas\n\nhas: \u003cattribute\u003e\n\nThe has keyword functions as a Boolean search term. When used in a search query, it returns all issues that contain a value for the specified attribute. Use the minus operator (-) before the specified attribute to find issues that have empty values.\n\nFor example, to find all issues in the TST project that are assigned to the current user, have a duplicates link, have attachments, but do not have any comments, enter in: TST for: me has: duplicates , attachments , -comments.\n\nYou can use the has keyword in combination with the following attributes:\n\nAttribute\n\nDescription\n\nattachments\n\nReturns issues that have attachments.\n\nboards\n\nReturns issues that are assigned to at least one agile board. When used with an exclusion operator (-), returns issues that aren't assigned to any boards.\n\nBoard \u003cboard name\u003e\n\nReturns issues that are assigned to the specified agile board.\n\ncomments\n\nReturns issues that have one or more comments.\n\ndescription\n\nReturns issues that do not have an empty description.\n\n\u003cfield name\u003e\n\nReturns issues that contain any value in the specified custom field. Enclose field names that contain spaces in braces.\n\nGantt\n\nReturns issues that are assigned to any Gantt chart.\n\n\u003clink type name\u003e\n\nReturns issues that have links that match the specified outward name or inward name. Enclose link names that contain spaces in braces.\n\nFor example, to find issues that are linked as subtasks to parent issues, use:\n\nhas: {Subtask of}\n\nTo find issues that aren't linked to a parent issue, use:\n\nhas: -{Subtask of}\n\nlinks\n\nReturns issues that have any issue link type.\n\nstar\n\nReturns issues that have the star tag for the current user.\n\nunderestimation\n\nReturns issues where the total spent time is greater than the original estimation value.\n\nvcs changes\n\nReturns issues that contain vcs changes.\n\nvotes\n\nReturns issues that have one or more votes.\n\nwork\n\nReturns issues that have one or more work items.\n\nissue ID\n\nissue ID: \u003cissue ID\u003e, #\u003cissue ID\u003e\n\nReturns an issue that matches the specified issue ID. This attribute can also be referenced as a single value with the syntax #\u003cissue ID\u003e or -\u003cissue ID\u003e. When the search returns a single issue, the result is displayed in single issue view.\n\nIf you don't use the syntax for an attribute-based search (issue ID: \u003cvalue\u003e or #\u003cvalue\u003e), the input is also parsed as a text search. In addition to any issue that matches the specified issue ID, the search results include any issue that contains the specified ID in any text attribute.\n\nIf you set the issue ID in quotes, the input is only parsed as a text search. The search results only include issues that contain the specified ID in a text attribute.\n\nNote that even when an issue ID is parsed as a text search, the results do not include issue links. To find issues based on issue links, use the links attribute or reference a specific link type.\n\nlinks\n\nlinks: \u003cissue ID\u003e\n\nReturns all issues that contain links to the specified issue.\n\nlooks like\n\nlooks like: \u003cissue ID\u003e\n\nReturns issues in which the issue summary or description contains words that are found in the issue summary or description in the specified issue. Issues that contain matching words in the issue summary are given higher weight when the search results are sorted by relevance.\n\nmentioned in\n\nmentioned in: \u003cissue id\u003e\n\nReturns issues with issue IDs referenced in the description or a comment of the target issue. Issue IDs in supplemental text fields aren't included in the search results.\n\nmentions\n\nmentions: \u003cissue id\u003e, \u003cuser\u003e\n\nReturns issues that contain either @mention for the specified user or issue IDs referenced in the description or a comment. User mentions and issue IDs in supplemental text fields aren't included in the search results.\n\norganization\n\norganization: \u003corganization name\u003e\n\nReturns issues that belong to the specified organization. This attribute can also be referenced as a single value.\n\nproject\n\nproject: \u003cproject name\u003e | \u003cproject ID\u003e\n\nReturns issues that belong to the specified project. This attribute can also be referenced as a single value.\n\nreporter\n\nreporter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues and tickets that were created by the specified user or a member of the specified group, including tickets created on behalf of the specified user. Use me to return issues that were created by the current user.\n\nresolved date\n\nresolved date: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that were resolved on a specific date or within a specified time frame.\n\nsaved search\n\nsaved search: \u003csaved search name\u003e\n\nReturns issues that match the search criteria of a saved search. This attribute can also be referenced as a single value with the syntax #\u003csaved search name\u003e or -\u003csaved search name\u003e.\n\nsubmitter\n\nsubmitter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were submitted by the specified user or a member of the specified group on behalf of another user. Use me to return issues that were submitted by the current user.\n\nsummary\n\nsummary: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in the issue summary.\n\ntag\n\ntag: \u003ctag name\u003e\n\nReturns issues that match a specified tag. This attribute can also be referenced as a single value with the syntax #\u003ctag name\u003e or -\u003ctag name\u003e\n\nupdated\n\nupdated: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues where the most recent change occurred on a specific date or within a specified time frame.\n\nupdater\n\nupdater: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were last updated by the specified user or a member of the specified group. Use me to return issues to which you applied the last update.\n\nvcs changes\n\nvcs changes: \u003ccommit hash\u003e\n\nReturns issues that contain vcs changes that were applied in the commit object that is identified by the specified SHA-1 commit hash.\n\nvisible to\n\nvisible to: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that are visible to the specified user or a member of the specified group.\n\nvoter\n\nvoter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that have votes from the specified user or a member of the specified group.\n\nCustom Fields\n\nYou can find issues that are assigned specific values in a custom field. As with other issue attributes, you use the syntax attribute: value or attribute: -value. In this case, the attribute is the name of the custom field. In most cases, you can reference values directly with the # or - symbols.\n\nFor custom fields that are assigned an empty value, you can reference this property as a value. For example, to search for issues that are not assigned to a specific user, enter Assignee: Unassigned or #Unassigned. If the field is not assigned an empty value, find issues that do not store a value in the field with the syntax \u003cfield name\u003e: {No \u003cfield name\u003e} or has: -\u003cfield name\u003e.\n\nThis section lists the search attributes for default custom fields. Note that default fields and their values can be customized. The actual field names, values, and aliases may vary.\n\nAffected versions\n\nAffected versions: \u003cvalue\u003e\n\nReturns issues that were detected in a specific version of the product.\n\nAssignee\n\nAssignee: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns all issues that are assigned to the specified user or a member of the specified group.\n\nFix versions\n\nFix versions: \u003cvalue\u003e\n\nReturns issues that were fixed in a specific version of the product.\n\nFixed in build\n\nFixed in build: \u003cvalue\u003e\n\nReturns issues that were fixed in the specified build.\n\nPriority\n\nPriority: \u003cvalue\u003e\n\nReturns issues that match the specified priority level.\n\nState\n\nState: \u003cvalue\u003e | Resolved | Unresolved\n\nReturns issues that match the specified state.\n\nThe Resolved and Unresolved states cannot be assigned to an issue directly, as they are properties of specific values that are stored in the State field.\n\nBy default, Fixed, Won't fix, Duplicate, Incomplete, Obsolete, and Can't reproduce states are set as Resolved.\n\nThe Submitted, Open, In Progress, Reopened, and To be discussed states are set as Unresolved.\n\nSubsystem\n\nSubsystem: \u003cvalue\u003e\n\nReturns issues that are assigned to a specific subsystem within a project.\n\nType\n\nType: \u003cvalue\u003e\n\nReturns issues that match the specified issue type.\n\nIssue Links\n\nYou can search for issues based on the links that connect them to other issues. Search queries that reference a specific issue link type can be interpreted in two different ways:\n\nWhen specified as \u003clink type\u003e: \u003cissue ID\u003e, the query returns issues linked to the specified issue using this link type.\n\nUsing \u003clink type\u003e: (\u003csub-query\u003e), the query returns issues linked to any issue that matches the specified sub-query using this link type.\n\nWhen searching for linked issues, you can enter the outward name or inward name of any issue link type, then specify your search criteria.\n\nThis list contains search parameters for issue link types that are provided by default in YouTrack. The default issue link types can be customized, which means that the actual names may vary. You can also use this syntax to build search queries that refer to custom link types.\n\nlinks\n\nlinks: \u003cissue ID\u003e\n\nReturns issues that are linked to a target issue.\n\naggregate\n\naggregate \u003caggregation link type\u003e: \u003cissue ID\u003e\n\nReturns issues that are indirectly linked to a target issue. Use this search term to find, for example, issues that are parent issues for a parent issue or subtasks of issues that are also subtasks of a target issue. The results include any issue that is linked to the target issue using the specified link type, whether directly or indirectly.\n\nThis search argument is only compatible with aggregation link types.\n\nDepends on\n\nDepends on: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have depends on links to a target issue or any issue that matches the specified sub-query.\n\nDuplicates\n\nDuplicates: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have duplicates links to a target issue or any issue that matches the specified sub-query.\n\nIs duplicated by\n\nIs duplicated by: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have is duplicated by links to a target issue or any issue that matches the specified sub-query.\n\nIs required for\n\nIs required for: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have is required for links to a target issue or any issue that matches the specified sub-query.\n\nParent for\n\nParent for: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have parent for links to a target issue or any issue that matches the specified sub-query.\n\nRelates to\n\nRelates to: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have relates to links to a target issue or any issue that matches the specified sub-query.\n\nSubtask of\n\nSubtask of: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have subtask of links to a target issue or any issue that matches the specified sub-query.\n\nTime Tracking\n\nThere is a dedicated set of search attributes that you can use to find issues that contain time tracking data. These attributes look for specific values that have been added as work items to an issue.\n\nwork\n\nwork: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or phrase in a work item.\n\nwork author: \u003cuser\u003e\n\nReturns issues that have work items that were added by the specified user.\n\nwork type\n\nwork type: \u003cvalue\u003e\n\nReturns issues that have work items that are assigned the specified work type. The query work type: {No type} returns issues that have work items that are not assigned a work item type.\n\nwork date\n\nwork date: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that have work items that are recorded for the specified date or within the specified time frame.\n\ncustom work item attributes\n\nwork \u003cattribute name\u003e: \u003cattribute value\u003e\n\nReturns issues that have work items that are assigned the specified value for a specific work item attribute.\n\nSort Attributes\n\nYou can specify the sort order for the list of issues that are returned by the search query.\n\nYou can sort issues by any of the attributes on the following list. In the Search Query Grammar, these attributes represent the \u003cSortAttribute\u003e value.\n\nsort by\n\nsort by: \u003cvalue\u003e \u003csort order\u003e\n\nSorts issues that are returned by the query in the specified order.\n\nWhen you perform a text search, the results can be sorted by relevance. You cannot specify relevance as a sort attribute. For more information, see Sorting by Relevance.\n\nKeywords\n\nThere are a number of values that can be substituted with a keyword. When you use a keyword in a search query, you do not specify an attribute. A keyword is preceded by the number sign (#) or the minus operator. In the YouTrack Search Query Grammar, these keywords correspond to a \u003cSingleValue\u003e.\n\nme\n\nReferences the current user. This keyword can be used as a value for any attribute that accepts a user.\n\nWhen used as a single value (#me) the search returns issues that are assigned to, reported by, or commented by the current user.\n\nFor example, to find unresolved issues that are assigned to, reported by, or contain comments from the current user, enter #me -Resolved.\n\nThe results also include issues that contain references to the current user in any custom field that stores values as users. For example, you have a custom field Reviewed by that stores a user type. The search query #me -Resolved also includes issues that reference the current user in this custom field.\n\nmy\n\nAn alias for me.\n\nResolved\n\nThis keyword references the Resolved issue property. This property is set based on the current value or combination of values for any custom field that stores a state type. In the default State field, the Resolved property is enabled for the values Fixed, Won't fix, Duplicate, Incomplete, Obsolete, and Can't reproduce.\n\nFor projects that use multiple state-type fields, the Resolved property is only true when all the state-type fields are assigned values that are considered to be resolved.\n\nFor example, to find all resolved issues that were updated today, enter #Resolved updated: Today.\n\nUnresolved\n\nThis keyword references the Unresolved issue property. This property is set based on the current value or combination of values for any custom field that stores a state type. In the default State field, the Resolved property is disabled for the values Submitted, Open, In Progress, Reopened, and To be discussed.\n\nFor projects that use multiple state-type fields, the Unresolved property is true when any state-type field is assigned a value that is not considered to be resolved.\n\nFor example, to find all unresolved issues that are assigned to the user john.doe in the Test project, enter #Unresolved project: Test for: john.doe.\n\nReleased\n\nThis keyword references the Released property for values in a field that stores a version type. It can only be used together with the attribute name or alias for a version field. This means that it cannot be referenced as a single value.\n\nWith fields that store multiple values, the search query returns issues for which at least one of the versions that are stored in the field is marked as released.\n\nFor example, to find all issues in the Test project that are fixed in a version that has not yet been released, enter in: Test fixed in: -Released.\n\nArchived\n\nThis keyword references the Archived property for values in a field that stores a version type. It can only be used together with the attribute name or alias for a version field. This means that it cannot be referenced as a single value.\n\nWith fields that store multiple values, the search query only returns issues for which all the versions that are stored in the field are marked as archived.\n\nFor example, to find all issues in the Test project that are fixed in a version that has been archived, enter in: Test fixed in: Archived.\n\nOperators\n\nThe search query grammar applies default semantics to search queries that do not contain explicit logical operators.\n\nSearches that specify values for multiple attributes are treated as conjunctive. This means that the values are handled as if joined by an AND operator. For example, State: {In Progress} Priority: Critical returns issues that are assigned the specified state and priority.\n\nThis extends to queries that look for the presence or absence of a value for a specific attribute (has) in combination with a reverence to a specific value for the same attribute. The presence or absence of a value and the value itself are considered as separate attributes in the issue. For example, has: assignee Assignee: me only returns issues where the assignee is set and that assignee is you.\n\nFor text search, searches that include multiple words are treated as conjunctive. This means that the words are handled as if joined by an AND operator. For example, State: Open context usage returns issues that contain matching forms for both context and usage.\n\nSearches that include multiple values for a single attribute are treated as disjunctive. This means that the values are handled as if joined by an OR operator. For example, State: {In Progress}, {To be discussed} returns issues that are assigned either one or the other of these two states.\n\nYou can override the default semantics by applying explicit operators to the query.\n\nand\n\nThe AND operator combines matches for multiple search attributes to narrow down the search results. When you join search arguments with the AND operator, the resulting issues must contain matches for all the specified attributes. Use this operator for issue fields that store enum[*] types and tags.\n\nSearch arguments that are joined with an AND operator are always processed as a group and have a higher priority than other arguments that are joined with an OR operator in the query.\n\nHere are a few examples of search queries that contain AND operators:\n\nTo find issues in the Ktor project that are tagged as both Next build and to be tested, enter:\n\nin: Ktor and tag: {Next build} and tag: {to be tested}\n\nThe AND operator between the two tags ensures that the results only contain issues that have both tags.\n\nTo find all issues that are set as Critical priority in the Ktor project or are set as Major priority and are assigned to you in the Kotlin project, enter:\n\nin: Ktor #Critical or in: Kotlin #Major and for: me\n\nIf you were to remove the operators in this query, the references to the project and priority are parsed as disjunctive (OR) statements. The reference to the assignee (me) is then joined with a conjunctive (AND) statement. Instead of getting critical issues in the Ktor project plus a list of major-priority issues that you are assigned in Kotlin, you would only issues that are assigned to you that are either major or critical in either Ktor or Kotlin.\n\nor\n\nThe OR operator combines matches for multiple search attribute to broaden the search results.\n\nThis is very useful when searching for a term which has a synonym that might be used in an issue instead. For example, a search for lesson OR tutorial returns issues that contain matching forms for either \"lesson\" or \"tutorial\". If you remove the OR operator from the query, the search is performed conjunctively, which means the result would only include issues that contain matching forms for both words.\n\nHere's another example of a search query that contains an OR operator:\n\nTo find all issues in the Ktor project that are assigned to you or are tagged as to be tested in any project, enter:\n\nin: Ktor for: me or tag: {to be tested}\n\nParentheses\n\nUsing parentheses ( and ) combines various search arguments to change the order in which the attributes and operators are processed. The part of a search query inside the parentheses has priority and is always processed as a single unit.\n\nThe most common use of parentheses is to enclose two search arguments that are separated by an OR operator and further restrict the search results by joining additional search arguments with AND operators.\n\nAny time you use parentheses in a search query, you need to provide all the operators that join the parenthetical statement to neighboring search arguments. For example, the search query in: Kotlin #Critical (in: Ktor and for:me) cannot be processed. It must be written as in: Kotlin #Critical or (in: Ktor and for:me) instead.\n\nHere's an example of a search query that uses parentheses:\n\nTo find all issues that are assigned to you and are either assigned Critical priority in the Kotlin project or are assigned Major priority in the Ktor project, enter:\n\n(in: Kotlin #Critical or in: Ktor #Major) and for: me\n\nSymbols\n\nThe following symbols can be used to extend or refine a search query.\n\nSymbol\n\nDescription\n\nExamples\n\n-\n\nExcludes a subset from a set of search query results. When you use this symbol with a single value, do not use the number sign.\n\nTo find all unresolved issues except for issues with minor priority and sort the list of results by priority in ascending order, enter #unresolved -minor sort by: priority asc.\n\n#\n\nIndicates that the input represents a single value.\n\nTo find all unresolved issues in the MRK project that were reported by, assigned to, or commented by the current user, enter #my #unresolved in: MRK.\n\n,\n\nSeparates a list of values for a single attribute. Can be used in combination with a range.\n\nTo find all issues assigned to, reported or commented by the current user, which were created today or yesterday, enter #my created: Today, Yesterday.\n\n..\n\nDefines a range of values. Insert this symbol between the values that define the upper and lower ranges. The search results include the upper and lower bounds.\n\nTo find all issues fixed in version 1.2.1 and in all versions from 1.3 to 1.5, enter fixed in: 1.2.1, 1.3 .. 1.5.\n\nTo find all issues created between March 10 and March 13, 2018, enter created: 2018-03-10 .. 2018-03-13.\n\n*\n\nWildcard character. Its behavior is context-dependent.\n\nWhen used with the .. symbol, substitutes a value that determines the upper or lower bound in a range search. The search results are inclusive of the specified bound.\n\nWhen used in an attribute-based search, matches zero or more characters at the end of an attribute value. For more information, see Wildcards in Attribute-based Search.\n\nWhen used in text search, matches zero or more characters in a string. For more information, see Wildcards in Text Search.\n\nTo find all issues created on or before March 10, 2018, enter created: * .. 2018-03-10\n\nTo find issues that have tags that start with refactoring, enter tag: refactoring*.\n\nTo find unresolved issues that contain image attachments in PNG format, enter #Unresolved attachments: *.png.\n\n?\n\nMatches any single character in a string. You can only use this wildcard to search in attributes that store text. For more information, see Wildcards in Text Search.\n\nTo find issues that contain the words \"prioritize\" or \"prioritise\" in the issue description, enter description: prioriti?e\n\n{ }\n\nEncloses attribute values that contain spaces.\n\nTo find all issues with the Fixed state that have the tag to be tested, enter #Fixed tag: {to be tested}.\n\nDate and Period Values\n\nSeveral search attributes reference values that are stored as a date. You can search for dates as single values or use a range of values to define a period.\n\nSpecify dates in the format: YYYY-MM-DD or YYYY-MM or MM-DD. You also can specify a time in 24h format: HH:MM:SS or HH:MM. To specify both date and time, use the format: YYYY-MM-DD}}T{{HH:MM:SS. For example, the search query created: 2010-01-01T12:00 .. 2010-01-01T15:00 returns all issues that were created on 1 January 2010 between 12:00 and 15:00.\n\nPredefined Relative Date Parameters\n\nYou can also use pre-defined relative parameters to search for date values. The values for these parameters are calculated relative to the current date according to the time zone of the current user. The actual value for each parameter is shown in the query assist panel.\n\nThe following relative date parameters are supported:\n\nParameter\n\nDescription\n\nNow\n\nThe current instant.\n\nToday\n\nThe current calendar day.\n\nTomorrow\n\nThe next calendar day.\n\nYesterday\n\nThe previous calendar day.\n\nSunday\n\nThe calendar Sunday for the current week.\n\nMonday\n\nThe calendar Monday for the current week.\n\nTuesday\n\nThe calendar Tuesday for the current week.\n\nWednesday\n\nThe calendar Wednesday for the current week.\n\nThursday\n\nThe calendar Thursday for the current week.\n\nFriday\n\nThe calendar Friday for the current week.\n\nSaturday\n\nThe calendar Saturday for the current week.\n\n{Last working day}\n\nThe most recent working day as defined by the Workdays that are configured in the settings on the Time Tracking page in YouTrack.\n\n{This week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the current week.\n\n{Last week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the previous week.\n\n{Next week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the next week.\n\n{Two weeks ago}\n\nThe period from 00:00 Monday to 23:59 Sunday for the calendar week two weeks prior to the current date.\n\n{Three weeks ago}\n\nThe period from 00:00 Monday to 23:59 Sunday for the calendar week three weeks prior to the current date.\n\n{This month}\n\nThe period from the first day to the last day of the current calendar month.\n\n{Last month}\n\nThe period from the first day to the last day of the previous calendar month.\n\n{Next month}\n\nThe period from the first day to the last day of the next calendar month.\n\nOlder\n\nThe period from 1 January 1970 to the last day of the month two months prior to the current date.\n\nCustom Date Parameters\n\nIf the predefined date parameters don't help you find issues that matter most to you, define your own date range in your search query. Here are a few examples of the queries you can write with custom date parameters:\n\nFind issues that have new comments added in the last seven days:\n\ncommented: {minus 7d} .. Today\n\nFind issues that were updated in the last two hours:\n\nupdated: {minus 2h} .. *\n\nFind unresolved issues that are at least one and a half years old:\n\ncreated: * .. {minus 1y 6M} #Unresolved\n\nFind issues that are due in five days:\n\nDue Date: {plus 5d}\n\nTo define a custom time frame in your search queries, use the following syntax:\n\nTo specify dates or times in the past, use minus.\n\nTo specify dates or times in the future, use plus.\n\nSpecify the time frame as a series of whole numbers followed by a letter that represents the unit of time. Separate each unit of time with a space character. For example:\n\n2y 3M 1w 2d 12h\n\nQueries that specify hours will filter for events that took place during the specified hour. For example, if it is currently 15:35, a query that is written as created: {minus 48h} returns issues that were created two days ago, at any time between 3 and 4 PM. Meanwhile, a query that is written as created: {minus 2d} returns all issues that were created two days ago at any time between midnight and 23:59.\n\nThis level of precision only applies to hours. A query that references the unit of time as 14d returns exactly the same results as 2w.\n\nSearch queries that specify units of time shorter than one hour (minutes, seconds) are not supported.\n\nSearch Query Grammar\n\nThis page provides a BNF description of the YouTrack search query grammar.\n\n\u003cSearchRequest\u003e ::= \u003cOrExpression\u003e \u003cOrExpession\u003e ::= \u003cAndExpression\u003e ('or' \u003cAndExpression\u003e)* \u003cAndExpression\u003e ::= \u003cAndOperand\u003e ('and' \u003cAndOperand\u003e)* \u003cAndOperand\u003e ::= '('\u003cOrExpression\u003e? ')' | Term \u003cTerm\u003e ::= \u003cTermItem\u003e* \u003cTermItem\u003e ::= \u003cQuotedText\u003e | \u003cNegativeText\u003e | \u003cPositiveSingleValue\u003e | \u003cNegativeSingleValue\u003e | \u003cSort\u003e | \u003cHas\u003e | \u003cCategorizedFilter\u003e | \u003cText\u003e \u003cCategorizedFilter\u003e ::= \u003cAttribute\u003e ':' \u003cAttributeFilter\u003e (',' \u003cAttributeFilter\u003e)* \u003cAttribute\u003e ::= \u003cname of issue field\u003e \u003cAttributeFilter\u003e ::= ('-'? \u003cValue\u003e ) | ('-'? \u003cValueRange\u003e) | \u003cLinkedIssuesQuery\u003e \u003cLinkedIssuesQuery\u003e ::= ( \u003cOrExpression\u003e ) \u003cValueRange\u003e ::= \u003cValue\u003e '..' \u003cValue\u003e \u003cPositiveSingleValue\u003e ::= '#'\u003cSingleValue\u003e \u003cNegativeSingleValue\u003e ::= '-'\u003cSingleValue\u003e \u003cSingleValue\u003e ::= \u003cValue\u003e \u003cSort\u003e ::= 'sort by:' \u003cSortField\u003e (',' \u003cSortField\u003e)* \u003cSortField\u003e ::= \u003cSortAttribute\u003e ('asc' | 'desc')? \u003cHas\u003e ::= 'has:' \u003cAttribute\u003e (',' \u003cAttribute\u003e)* \u003cQuotedText\u003e ::= '\"' \u003ctext without quotes\u003e '\"' \u003cNegativeText\u003e ::= '-' \u003cQuotedText\u003e \u003cText\u003e ::= \u003ctext without parentheses\u003e \u003cValue\u003e ::= \u003cComplexValue\u003e | \u003cSimpleValue\u003e \u003cSimpleValue\u003e ::= \u003cvalue without spaces\u003e \u003cComplexValue\u003e ::= '{' \u003cvalue (can have spaces)\u003e '}'\n\nGrammar is case-insensitive.\n\nFor a complete list of search attributes, see Issue Attributes.\n\nTo see sample queries for common use cases, see Sample Search Queries.\n\n11 November 2025",
    "link": "https://www.jetbrains.com/help/youtrack/cloud/search-and-command-attributes.html",
    "snippet": "This page provides a list of attributes and keywords that are used in YouTrack query language. You'll also find a complete list of operators, symbols, and ...",
    "title": "Search Query Reference | YouTrack Cloud Documentation - JetBrains"
  },
  {
    "content_readable": "Introduced in 2020, the GitHub user profile README allow individuals to give a long-form introduction. This multi-part tutorial explains how I setup my own profile to create dynamic content to aid discovery of my projects:\n\nwith the Liquid template engine and Shields (Part 1 of 4)\nusing GitHub's GraphQL API to query dynamic data about all my repos (keep reading below)\nfetching RSS and Social cards from third-party sites (Part 3 of 4)\nautomating updates with GitHub Actions (Part 4 of 4)\n\nYou can visit github.com/j12y to see the final result of what I came up with for my own profile page.\n\nThe GitHub Repo Gallery\n\nThe intended behavior for my repo gallery is to create something similar to pinned repositories but with a bit more visual pizzazz to identify what the projects are about.\n\nIn addition to source code, the repo can have metadata associated with it:\n\n✔️ Name of the repository\n✔️ Short description of the project\n✔️ Programming language used for the project\n✔️ List of tags / topics\n✔️ Image that can be used for social cards\n\nAbout\n\nThe About has editable fields to set the description and topics.\n\nSettings\n\nThe Settings includes a place to upload an image for social media preview cards.\n\nIf you don't set a preview card image, GitHub will generate one automatically that includes some basic profile statistics and your user profile image.\n\nGetting Started with the GitHub REST API\n\nThe way I structured this project is to build a library of any functions related to querying GitHub in src/gh.ts. I used a .env file to store my personal access (classic) token for authentication during local development.\n\n├── package.json\n├── .env\n├── src\n│   ├── app.ts\n│   ├── gh.ts\n│   └── template\n│       ├── README.liquid\n│       ├── contact.liquid\n│       └── gallery.liquid\n└── tsconfig.json\n\n\nI started by using REST endpoints with the Octokit library and TypeScript bindings.\n\n// src/gh.ts\nimport { Octokit } from 'octokit';\nimport { RestEndpointMethodTypes } from '@octokit/plugin-rest-endpoint-methods'\nconst octokit = new Octokit({ auth: process.env.TOKEN});\n\nexport class GitHub {\n    // GET /users/{user}\n    // https://docs.github.com/en/rest/users/users#get-a-user\n    async getUserDetails(user: string): Promise\u003cRestEndpointMethodTypes['users']['getByUsername']['response']['data']\u003e {\n        const { data } = await octokit.rest.users.getByUsername({\n            username: user\n        });\n\n        return data;\n    };\n}\n\n\nFrom src/app.ts I initialize the GithHub class, fetch the results, and can inspect the data being returned as a way to get comfortable with the various endpoints.\n\n// src/app.ts\nimport dotenv from 'dotenv';\nimport { GitHub } from \"./gh\";\n\nexport async function main() {\n  dotenv.config();\n  const gh = new GitHub()\n\n  const details = await gh.getUserDetails();\n  console.log(details);\n}\nmain();\n\n\nI typically get started on projects with simple tests like this to make sure all the various pieces to an integration can be configured and work together before getting too far.\n\nUse the GitHub GraphQL Endpoint\n\nTo get the data needed for the gallery layout, it would be necessary to make multiple calls to REST endpoints. In addition there is some data not yet available from the REST endpoint at all.\n\nSwitching to query using the GitHub GraphQL interface becomes helpful. This single endpoint can process a number of queries and give precise control over the data needed.\n\n💡 The GitHub GraphQL Explorer was fundamentally useful for me to get the right queries defined\n\nThis query needs authorization with the personal access token to fetch profile details about followers similar to some of the details returned from the REST endpoints.\n\n// src/gh.ts\n\nconst { graphql } = require(\"@octokit/graphql\")\n\nexport class GitHub \n    // https://docs.github.com/en/graphql\n    graphqlWithAuth = graphql.defaults({\n        headers: {\n            authorization: `token ${process.env.TOKEN}`\n        }\n    })\n\n    async getProfileOverview(name: string): Promise\u003cany\u003e {\n        const query = `\n            query getProfileOverview($name: String!) { \n                user(login: $name) { \n                    followers(first: 100) {\n                        totalCount\n                        edges {\n                            node {\n                                login\n                                name\n                                twitterUsername\n                                email\n                            }\n                        }\n                    }\n                }\n            }\n        `;\n        const params = {'name': name};\n\n        return await this.graphqlWithAuth(query, params);\n    }\n}\n\n\nThere are other resources such as Learn GraphQL if you haven't written many queries yet which explains the basics around syntax, schemas, and types.\n\nGetting used to GitHub's GraphQL schema primarily involves walking a series of edges to find linked nodes for objects of interest and their data attributes. In this case, I started by querying a user profile, finding the list of linked followers, and then inspecting their corresponding node's login, name, and email address.\n\n   ┌────────────┐\n   │    user    │\n   └─────┬──────┘\n         │\n         └──followers\n               │\n               ├─── totalCount\n               │\n               └─── edges\n                     │\n                     └── node\n\n\n\nFaceted Search by Topic Frequency\n\nI often want to find repositories by a topic. The user interface makes it easy to filter among many repositories by programming language such as python but unless you know which topics are relevant can become hit or miss. Was it nlp or nltk I used to categorize related repositories. Did I use dolby or dolbyio to identify repos I have for work projects.\n\nA faceted search that narrows down the number of matching repositories can be helpful for finding relevant projects like this. Given topics on GitHub are open-ended and not constrained to fixed values, it can be easy to accidentally categorize repos with variations like lambda and aws-lambda such that searches only identify partial results.\n\nTo address this, a GraphQL query gathering topics by frequency of usage within an organization or individual account can help with identifying the most useful topics.\n\nThe steps for this would be:\n\nQuery repository topics\nProcess results to group topics by frequency\nUse a template to render the gallery\n\n1 - Query Repository Topics\n\nI used the following GraphQL query to fetch my repositories and their corresponding topics.\n\nconst query = `\n    query getReposOverview($name: String!) {\n        user(login: $name) {\n            repositories(first: 100 ownerAffiliations: OWNER) {\n                edges {\n                    node {\n                        name\n                        url\n                        description\n                        openGraphImageUrl\n                        repositoryTopics(first: 100) {\n                            edges {\n                                node {\n                                    topic {\n                                        name\n                                    }\n                                }\n                            }\n                        }\n                        primaryLanguage {\n                            name\n                        }\n                    }\n                }\n            }\n        }\n    }\n`;\n\n\nThis query starts by filtering by user owned repositories (not counting forks) along with the metadata such as the social image.\n\n2 - Process Results and Group Topics by Frequency\n\nIterating over the results of the query the convention used was to look for anything with the topic github-gallery as something to be featured in the gallery. We also get a count of usage for each of the other topics and programming languages.\n\nvar topics: {[id: string]: number } = {};\nvar languages: {[id: string]: number } = {};\nvar gallery: {[id: string]: any } = {};\n\nconst repos = await gh.getReposOverview(user);\nfor (let repo of repos.user.repositories.edges) {\n  // Count occurrences of each topic\n  repo.node.repositoryTopics.edges.forEach((topic: any) =\u003e {\n    if (topic.node.topic.name == 'github-gallery') {\n      gallery[repo.node.name] = repo;\n    } else {\n      topics[topic.node.topic.name] = topic.node.topic.name in topics ? topics[topic.node.topic.name] + 1 : 1;\n    }\n  });\n\n  // Count and include count of language used\n  if (repo.node.primaryLanguage) {\n    languages[repo.node.primaryLanguage.name] = repo.node.primaryLanguage.name in languages ? languages[repo.node.primaryLanguage.name] + 1 : 1;\n  }\n}\n\n\n3 - Use a template to render the gallery\n\nThe topics are ordered by how often they are used. From the previous post on setting up a dynamic profile, I'm passing scope to the liquid engine for any data to be made available in a template.\n\n  // Share topics sorted by frequency of use for filtering repositories\n  // from the organization\n  scope['topics'] = Object.entries(topics).sort(function (first, second) {\n    return second[1] - first[1];\n  });\n  scope['languages'] = Object.entries(languages).sort(function (first, second) {\n    return second[1] - first[1];\n  });\n\n  // Gather topics across repos\n  scope['gallery'] = Object.values(gallery);\n\n\n\nThe repository page on GitHub uses query parameters to sort and filter, so items like topic:nltk can be passed directly in the URL to load a filtered view of repositories. The shields create a nice looking button for navigating to the topic, and use of icons for programming languages helps find relevant code samples.\n\n\u003cp\u003eExplore some of my projects: \u003cbr/\u003e\n{% for language in languages %}\u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=language%3A{{language[0]}}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/badge/{{ language[0] }}-{{ language[1] }}-lightgrey?logo={{ language[0] }}\u0026label={{ language[0] }}\u0026labelColor=000000\" alt=\"{{ language[0] }}\"/\u003e\u003c/a\u003e {% endfor %}\n{% for topic in topics %}\u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=topic%3A{{topic[0]}}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/static/v1?label={{topic[0]}}\u0026message={{ topic[1] }}\u0026labelColor=blue\"/\u003e\u003c/a\u003e {% endfor %}\n\u003c/p\u003e\n\n\nThe presentation includes a 3-column row in a table for displaying the metadata about each featured gallery project. This could display all repositories, but limiting to one or two rows seems sensible for managing screen space.\n\n{% for tile in gallery limit:3 %}\n\u003ctd width=\"25%\" valign=\"top\" style=\"padding-top: 20px; padding-bottom: 20px; padding-left: 30px; padding-right: 30px;\"\u003e\n\u003ca href=\"{{ tile.node.url }}\"\u003e\u003cimg src=\"{{ tile.node.openGraphImageUrl }}\"/\u003e\u003c/a\u003e\n\u003cp\u003e\u003cb\u003e\u003ca href=\"{{ tile.node.url }}\"\u003e{{ tile.node.name }}\u003c/b\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e{{ tile.node.description }}\u003cbr/\u003e\n{% for topic in tile.node.repositoryTopics.edges %} \u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=topic%3A{{topic.node.topic.name }}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/badge/{{ topic.node.topic.name | replace: \"-\", \"--\" }}-blue?style=pill\"/\u003e\u003c/a\u003e {% endfor %}\n\u003c/p\u003e\n\u003c/td\u003e\n{% endfor %}\n\n\nWith all of that put together, we now have a gallery that displays a picture along with the name, description, and tags. The picture can highlight a user interface, architectural diagram, or some other branded visual to help identify the purpose of the project visually.\n\nWe can also use this to maintain our list of topics and make finding relevant topics for an audience easier to discover.\n\nLearn more\n\nI hope this overview helps with getting yourself sorted. The next article will dive into some of the other ways of aggregating content.\n\nFetching RSS and Social Cards for GitHub Profile (Part 3 of 4)\nAutomating GitHub Profile Updates with Actions (Part 4 of 4)\n\nDid this help you get your own profile started? Let me know and follow to get notified about updates.",
    "link": "https://dev.to/j12y/query-github-repo-topics-using-graphql-35ha",
    "snippet": "Creating a customized user profile page for GitHub to showcase work projects and make navigation to relevant topics easier.",
    "title": "Query GitHub Repo Topics Using GraphQL - DEV Community"
  },
  {
    "content_readable": "Updated\n\n4 days ago\n\nWith millions of conversations happening all over the web each day, it can be a long and tedious task trying to get more relevant mentions and tighten the scope of your query, but with the help of Advanced Topic Query, it can be at your fingertips.\n\nIn Social Listening, you have the option to create an advanced query that is not limited to ANY, ALL, or NONE formatting of query building. Advanced query builder can be used to form complex text queries which are not possible with a normal query builder.\n\nWhat is an Advanced Topic Query?\n\nAdvanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more.\n\nBy using advanced query you can pinpoint relevant information which is not possible with basic topic query.\n\nIt gives you the power to find the needle in a haystack.\n\n​\n\nBasic Topic Query v/s Advanced Topic Query\n\nWith more operators to use you can fetch conversations by language, geography, social media channel, volume, author, #listening, @account monitoring, user segment, and much more, it can give you access to more actionable insights.\n\nIn Basic Query, you can only use boolean operators like OR/ NOT/ AND/ along with NEAR. On the other hand, in Advanced Topic Query, it gives you access to use OR with/ inside AND, NOT (nested and within operator use cases), advanced operators, exact match operators etc.\n\nLet's see the use cases where advanced query will help in getting more insightful mentions –\n\nUse case #1: To search \"pepsi\" OR \"drink\" along with \"cups\".\n\nBasic Query\n\nAdvancd Query\n\nUse case #2: To get mentions of \"pepsi\" along with \"coke\" or \"sprite\" but not \"miranda\" with people having \"follower count\" between 100 to 1000 on \"twitter\".\n\nBasic Query\n\nAdvanced Query\n\nNot feasible in the basic Topic query\n\nThis is where we need the advanced Topic query.​\n\nHow to create an advanced Topic query?\n\nClick the New Tab icon. Under Sprinklr Insights, Click Topics within Listening.\n\nOn the Topics window, click Add Topic in the top right corner. Fill in the required fields and click Create.\n\nIn the Setup Query tab of Create New Topic window, select Advanced Query in the query section.\n\n​\n\nType your query in the Advanced Query field with the required operators and syntax.\n\nClick Save.\n\nTip: While using Instagram as a Listening Source, be sure that your query keywords include hashtags.\n\nWhich operators to use for building Topic queries?\n\nOperators for Topic queries\n\nIn creation of advanced queries along with boolean operators OR/ AND/ NOT/ etc, Sprinklr also supports operator types –\n\nSearch Operators\n\nExact Match Operators\n\nOperators for Getting Post Replies/Comments​\n\nSprinklr provides its user edge by giving them power to use Keywords List inside advanced query along with Operators mentioned.\n\nCreate query using Topic query operators\n\nFollowing are some most used operator examples and their results –\n\nOperator\n\nExample\n\nResult\n\nhello\n\nSearch for the term \"hello\"\n\nsocial sprinklr\n\nSearch for the phrases \"social\" and \"sprinklr\"\n\n​\n\nNote: Using this will show preview but topic can not be saved as it will show error, Use \"Social Sprinklr\" or (Social AND/OR/ NOT/ NEAR Sprinklr) to eliminate error.\n\nAND\n\nsocial AND sprinklr\n\nSearch for \"social\" and \"sprinklr\" anywhere within the complete message, irrespective of keywords between them\n\nOR\n\nsocial OR sprinklr\n\nSearch for \"social\" or \"sprinklr\"\n\nNOT\n\n\"social media\" NOT \"facebook\"\n\nSearch for results that contain \"social media\" but not \"facebook\"\n\n~\n\n\"social media\"~10\n\nSearch for \"social\" and \"media\" within 10 words of each other\n\nNEAR\n\nsocial NEAR/10 media\n\nSearch for \"social\" and \"media\" within 10 words of each other\n\nNote: This operator can be used with keyword lists.\n\nONEAR\n\nsocial ONEAR/10 media\n\nSearch for \"social\" and \"media\" within 10 words of each other in an ordered way\n\nNote: This operator searches social ahead of media.\n\ntitle\n\ntitle: (\"social media\")\n\nSearch for social media in the title of the message\n\nNote: It is mostly used for News, blogs, reviews and other sites.\n\nauthor\n\nauthor: \"social_media\"\n\nFetches all the mentions from author name: social_media\n\nSome other operators which are supported by Sprinklr are –\n\nProximity: It is used to define proximity or distance between 2 keywords only, whereas, NEAR can be used to define proximity between two keywords as well as keyword lists.\n\nOnear (Ordered Near): It sets the order in which the keywords will appear. For example, Keyword-List1 ONEAR/10 Keyword-List2 will ensure keywords from Keyword-List1 appear first and then Keyword-List2 keywords will follow within space of maximum 10.\n\nStep by step guide to make advanced Topic query\n\nUse case\n\nTo write query fetching mentions of ZARA –\n\n​\n\n(# listening is used for instagram listening)\n\nGetting mention along with clothing or fashion related terms only –\n\nRemoving profanity from mention (use case specific) –\n\nRemoving profanity from mention (use case specific) –\n\nAs social media has lots of profane words you can also remove it by making a keyword list and negating it from query –\n\nFiltering Mentions in English –\n\n​\n\nApplying source input as Twitter –\n\nGetting mentions of those users which have followers between 100 to 1000 –\n\n​\n\nAdvanced example showcasing use of Topic query operators and keyword list –\n\nBest practices while using Advanced Query\n\nUse of Parentheses\n\n​Parentheses are not necessary to enclose a search query but can be useful while grouping operations together for more complex queries.\n\n​\n\nFor example, if you want to return results that mention Samsung or Apple phones, and also want to query content that mentions phones along with either Apple or Samsung, you could use parentheses around Apple and Samsung to group three keywords together, as shown below –\n\nphone AND (Apple OR Samsung)\n\n​\n\nUse of parentheses within brackets, is further explained below with an example –\n\n[(internet of things ~3) OR iot OR internetofthings) AND (robots OR robot OR #robot)] NOT [things]\n\nTip: You can also use parentheses within brackets to set off additional operations within the Advanced Query field. The end result should look similar to the result summary of a basic query, built using multiple operations within a single section.\n\n\nAs a part of the rest of the query, this will perform the following operations –\n\nSearch for posts that contain the phrase \"internet of things\" or \"#internetofthings\"\n\nFrom within those results, keep any result that also says \"robots\" or \"robot\" or \"#robot\" within three words (a proximity search) of either \"internet of things\" or \"iot\" or \"internetofthings\".\n\nDiscard any results that just have the phrase \"things\" within.\n\nParentheses nested within brackets intend to set off different operations as isolated processes. In the previous example, if you build an Advanced Query that states [(internet of things OR iot OR internet of things) AND (robots OR robot OR #robot)] your query will return results that contain ANY of the first three terms and the second three terms.\n\nHowever, if you build an Advanced Query that states [internet of things OR iot OR internet of things AND robots OR robot OR #robot], your query will return any result that contains the phrase \"internet of things\" or the word \"iot\" or the word \"robot\" or the hashtag #robot or specifically the phrase \"internet of things\" within the same message as the word \"robots\".\n\nNote:\n\nYou cannot use a \"NOT\" statement with an \"OR\" statement.\n\n\nExample:\n( social OR NOT media ) ❌\n( social NOT media ) ✅\n\n(( social OR ( media NOT facebook )) ✅\n\nWhy?\n\nQuery should not contain \"NOT\" terms in \"OR\" with other terms, \"NOT\" clauses should be used in \"AND\" with other terms, using \"NOT\" in \"OR\" will bring too much data.\n\nUse of Quotation marks\n\nQuotation marks can be used for phrases in which you are looking for an exact match of those particular words in a specific order. Using parentheses or quotation marks for single-word queries is not mandatory.\n\nUse straight quotation marks ( \" \" ) for outlining phrases within it. The use of curved quotation marks (“ ”) will not produce your desired results.\n\nParentheses are generally used to group keywords or phrases joined by one or more operators together, but with other keywords involved, parentheses and quotations would act differently. For example –\n\nVersion 1: \"Phil Schiller\" AND \"Apple Marketing\" will return results for content with the exact phrase Phil Schiller (or phil schiller) and the exact phrase Apple Marketing (or apple marketing).\n\nNote: Here exact does not mean case sensitive as in the case of exactMessage Operator.\n\nExample: exactMessage: (\"Phil Schiller\" AND \"Apple Marketing\"), which will fetch results for phrase Phil Schiller (not phil schiller) and the exact phrase Apple Marketing (not apple marketing).\n\n\nVersion 2: \"Phil Schiller\" AND (Apple OR Marketing) will return results for content with the phrase \"Phil Schiller\" (together) and at least one of the words, Apple or Marketing.\n\nHandling for Broad \u0026 Ambiguous Keywords\n\nIt is very important to not use/reduce use of broad keywords in advanced queries. Broader keywords will fetch mentions that are unrelated to topic of interest, and eventually hinder dashboard/insights\n\nFor all keywords used in an advanced topic query, ensure they are directly related to the topic of interest.\n\nIn case keywords are broad but relevant to topic, they should be tied to some relevant keywords related to that topic, by using NEAR Operators\n\nExample: Robot is an important keyword for Robot Company. However just using this keyword will fetch irrelevant keywords as it’s a broad keyword used for other entities as well (Robot Street, etc).\n\nInstead of using just Robot keyword, we should use: Robot NEAR/4 (Technology OR “machine” OR # tech OR IOT OR “Internet of things” ….)\n\nNote how keywords related to Robot are used with NEAR Operator. Related keywords could be related entities, industry keywords, parent company, country keywords, etc.\n\nFrequently asked questions\n\n​\n\nIs it compulsory to put quotation marks around phrases like \"apple music\" or can we use apple music directly?\n\nHow can I eliminate posts with many spam #’s or @’s?\n\nCan exact match or parent operators be used in advanced query?\n\nWhy am I able to see mentions in preview during making of topic but not in dashboard?\n\nDuring listening to @ mentions a lot of spam mentions are also getting tagged along, e.g. like wanting to get mentions of @tom but messages of @tom_fan56 are also coming. How to remove these irrelevant mentions?\n\nIf I write query as “tom” will it also fetch mentions such as tom_jerry / @tom / #tom ?\n\n​",
    "link": "https://www.sprinklr.com/help/articles/faqs-and-advanced-usecases/create-an-advanced-topic-query/646331628ea3c9635cf36711",
    "snippet": "Advanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more. By ...",
    "title": "‎Create an Advanced Topic Query | Sprinklr Help Center"
  },
  {
    "content_readable": "The query language for the Azure Resource Graph supports many operators and functions. Each work and operate based on Kusto Query Language (KQL). To learn about the query language used by Resource Graph, start with the tutorial for KQL.\n\nThis article covers the language components supported by Resource Graph:\n\nUnderstanding the Azure Resource Graph query language\n\nResource Graph tables\nExtended properties\nResource Graph custom language elements\n\nShared query syntax (preview)\nSupported KQL language elements\n\nSupported tabular/top level operators\nQuery scope\nEscape characters\nNext steps\n\nResource Graph tables\n\nResource Graph provides several tables for the data it stores about Azure Resource Manager resource types and their properties. Resource Graph tables can be used with the join operator to get properties from related resource types.\n\nResource Graph tables support the join flavors:\n\ninnerunique\ninner\nleftouter\nfullouter\n\nResource Graph table Can join other tables? Description\nAdvisorResources Yes Includes resources related to Microsoft.Advisor.\nAlertsManagementResources Yes Includes resources related to Microsoft.AlertsManagement.\nAppServiceResources Yes Includes resources related to Microsoft.Web.\nAuthorizationResources Yes Includes resources related to Microsoft.Authorization.\nAWSResources Yes Includes resources related to Microsoft.AwsConnector.\nAzureBusinessContinuityResources Yes Includes resources related to Microsoft.AzureBusinessContinuity.\nChaosResources Yes Includes resources related to Microsoft.Chaos.\nCommunityGalleryResources Yes Includes resources related to Microsoft.Compute.\nComputeResources Yes Includes resources related to Microsoft.Compute Virtual Machine Scale Sets.\nDesktopVirtualizationResources Yes Includes resources related to Microsoft.DesktopVirtualization.\nDnsResources Yes Includes resources related to Microsoft.Network.\nEdgeOrderResources Yes Includes resources related to Microsoft.EdgeOrder.\nElasticsanResources Yes Includes resources related to Microsoft.ElasticSan.\nExtendedLocationResources Yes Includes resources related to Microsoft.ExtendedLocation.\nFeatureResources Yes Includes resources related to Microsoft.Features.\nGuestConfigurationResources Yes Includes resources related to Microsoft.GuestConfiguration.\nHealthResourceChanges Yes Includes resources related to Microsoft.Resources.\nHealthResources Yes Includes resources related to Microsoft.ResourceHealth.\nInsightsResources Yes Includes resources related to Microsoft.Insights.\nIoTSecurityResources Yes Includes resources related to Microsoft.IoTSecurity and Microsoft.IoTFirmwareDefense.\nKubernetesConfigurationResources Yes Includes resources related to Microsoft.KubernetesConfiguration.\nKustoResources Yes Includes resources related to Microsoft.Kusto.\nMaintenanceResources Yes Includes resources related to Microsoft.Maintenance.\nManagedServicesResources Yes Includes resources related to Microsoft.ManagedServices.\nMigrateResources Yes Includes resources related to Microsoft.OffAzure.\nNetworkResources Yes Includes resources related to Microsoft.Network.\nPatchAssessmentResources Yes Includes resources related to Azure Virtual Machines patch assessment Microsoft.Compute and Microsoft.HybridCompute.\nPatchInstallationResources Yes Includes resources related to Azure Virtual Machines patch installation Microsoft.Compute and Microsoft.HybridCompute.\nPolicyResources Yes Includes resources related to Microsoft.PolicyInsights.\nRecoveryServicesResources Yes Includes resources related to Microsoft.DataProtection and Microsoft.RecoveryServices.\nResourceChanges Yes Includes resources related to Microsoft.Resources.\nResourceContainerChanges Yes Includes resources related to Microsoft.Resources.\nResourceContainers Yes Includes management group (Microsoft.Management/managementGroups), subscription (Microsoft.Resources/subscriptions) and resource group (Microsoft.Resources/subscriptions/resourcegroups) resource types and data.\nResources Yes The default table if a table isn't defined in the query. Most Resource Manager resource types and properties are here.\nSecurityResources Yes Includes resources related to Microsoft.Security.\nServiceFabricResources Yes Includes resources related to Microsoft.ServiceFabric.\nServiceHealthResources Yes Includes resources related to Microsoft.ResourceHealth/events.\nSpotResources Yes Includes resources related to Microsoft.Compute.\nSupportResources Yes Includes resources related to Microsoft.Support.\nTagsResources Yes Includes resources related to Microsoft.Resources/tagnamespaces.\n\nFor a list of tables that includes resource types, go to Azure Resource Graph table and resource type reference.\n\nNote\n\nResources is the default table. While querying the Resources table, it isn't required to provide the table name unless join or union are used. But the recommended practice is to always include the initial table in the query.\n\nTo discover which resource types are available in each table, use Resource Graph Explorer in the portal. As an alternative, use a query such as \u003ctableName\u003e | distinct type to get a list of resource types the given Resource Graph table supports that exist in your environment.\n\nThe following query shows a simple join. The query result blends the columns together and any duplicate column names from the joined table, ResourceContainers in this example, are appended with 1. As ResourceContainers table has types for both subscriptions and resource groups, either type might be used to join to the resource from Resources table.\n\nResources\n| join ResourceContainers on subscriptionId\n| limit 1\n\n\nThe following query shows a more complex use of join. First, the query uses project to get the fields from Resources for the Azure Key Vault vaults resource type. The next step uses join to merge the results with ResourceContainers where the type is a subscription on a property that is both in the first table's project and the joined table's project. The field rename avoids join adding it as name1 since the property already is projected from Resources. The query result is a single key vault displaying type, the name, location, and resource group of the key vault, along with the name of the subscription it's in.\n\nResources\n| where type == 'microsoft.keyvault/vaults'\n| project name, type, location, subscriptionId, resourceGroup\n| join (ResourceContainers | where type=='microsoft.resources/subscriptions' | project SubName=name, subscriptionId) on subscriptionId\n| project type, name, location, resourceGroup, SubName\n| limit 1\n\n\nNote\n\nWhen limiting the join results with project, the property used by join to relate the two tables, subscriptionId in the above example, must be included in project.\n\nExtended properties\n\nAs a preview feature, some of the resource types in Resource Graph have more type-related properties available to query beyond the properties provided by Azure Resource Manager. This set of values, known as extended properties, exists on a supported resource type in properties.extended. To show resource types with extended properties, use the following query:\n\nResources\n| where isnotnull(properties.extended)\n| distinct type\n| order by type asc\n\n\nExample: Get count of virtual machines by instanceView.powerState.code:\n\nResources\n| where type == 'microsoft.compute/virtualmachines'\n| summarize count() by tostring(properties.extended.instanceView.powerState.code)\n\n\nResource Graph custom language elements\n\nShared query syntax (preview)\n\nAs a preview feature, a shared query can be accessed directly in a Resource Graph query. This scenario makes it possible to create standard queries as shared queries and reuse them. To call a shared query inside a Resource Graph query, use the {{shared-query-uri}} syntax. The URI of the shared query is the Resource ID of the shared query on the Settings page for that query. In this example, our shared query URI is /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS. This URI points to the subscription, resource group, and full name of the shared query we want to reference in another query. This query is the same as the one created in Tutorial: Create and share a query.\n\nNote\n\nYou can't save a query that references a shared query as a shared query.\n\nExample 1: Use only the shared query:\n\nThe results of this Resource Graph query are the same as the query stored in the shared query.\n\n{{/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS}}\n\n\nExample 2: Include the shared query as part of a larger query:\n\nThis query first uses the shared query, and then uses limit to further restrict the results.\n\n{{/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS}}\n| where properties_storageProfile_osDisk_osType =~ 'Windows'\n\n\nSupported KQL language elements\n\nResource Graph supports a subset of KQL data types, scalar functions, scalar operators, and aggregation functions. Specific tabular operators are supported by Resource Graph, some of which have different behaviors.\n\nSupported tabular/top level operators\n\nHere's the list of KQL tabular operators supported by Resource Graph with specific samples:\n\nKQL Resource Graph sample query Notes\ncount Count key vaults\ndistinct Show resources that contain storage\nextend Count virtual machines by OS type\njoin Key vault with subscription name Join flavors supported: innerunique, inner, leftouter, and fullouter. Limit of three join or union operations (or a combination of the two) in a single query, counted together, one of which might be a cross-table join. If all cross-table join use is between Resource and ResourceContainers, then three cross-table join are allowed. Custom join strategies, such as broadcast join, aren't allowed. For which tables can use join, go to Resource Graph tables.\nlimit List all public IP addresses Synonym of take. Doesn't work with Skip.\nmvexpand Legacy operator, use mv-expand instead. RowLimit max of 2,000. The default is 128.\nmv-expand List Azure Cosmos DB with specific write locations RowLimit max of 2,000. The default is 128. Limit of 3 mv-expand in a single query.\norder List resources sorted by name Synonym of sort\nparse Get virtual networks and subnets of network interfaces It's optimal to access properties directly if they exist instead of using parse.\nproject List resources sorted by name\nproject-away Remove columns from results\nsort List resources sorted by name Synonym of order\nsummarize Count Azure resources Simplified first page only\ntake List all public IP addresses Synonym of limit. Doesn't work with Skip.\ntop Show first five virtual machines by name and their OS type\nunion Combine results from two queries into a single result Single table allowed: | union [kind= inner|outer] [withsource=ColumnName] Table. Limit of three union legs in a single query. Fuzzy resolution of union leg tables isn't allowed. Might be used within a single table or between the Resources and ResourceContainers tables.\nwhere Show resources that contain storage\n\nThere's a default limit of three join and three mv-expand operators in a single Resource Graph SDK query. You can request an increase in these limits for your tenant through Help + support.\n\nTo support the Open Query portal experience, Azure Resource Graph Explorer has a higher global limit than Resource Graph SDK.\n\nNote\n\nYou can't reference a table as right table multiple times, which exceeds the limit of 1. If you do so, you would receive an error with code DisallowedMaxNumberOfRemoteTables.\n\nQuery scope\n\nThe scope of the subscriptions or management groups from which resources are returned by a query defaults to a list of subscriptions based on the context of the authorized user. If a management group or a subscription list isn't defined, the query scope is all resources, and includes Azure Lighthouse delegated resources.\n\nThe list of subscriptions or management groups to query can be manually defined to change the scope of the results. For example, the REST API managementGroups property takes the management group ID, which is different from the name of the management group. When managementGroups is specified, resources from the first 10,000 subscriptions in or under the specified management group hierarchy are included. managementGroups can't be used at the same time as subscriptions.\n\nExample: Query all resources within the hierarchy of the management group named My Management Group with ID myMG.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01\n\n\nRequest Body\n\n{\n  \"query\": \"Resources | summarize count()\",\n  \"managementGroups\": [\"myMG\"]\n}\n\n\nThe AuthorizationScopeFilter parameter enables you to list Azure Policy assignments and Azure role-based access control (Azure RBAC) role assignments in the AuthorizationResources table that are inherited from upper scopes. The AuthorizationScopeFilter parameter accepts the following values for the PolicyResources and AuthorizationResources tables:\n\nAtScopeAndBelow (default if not specified): Returns assignments for the given scope and all child scopes.\nAtScopeAndAbove: Returns assignments for the given scope and all parent scopes, but not child scopes.\nAtScopeAboveAndBelow: Returns assignments for the given scope, all parent scopes, and all child scopes.\nAtScopeExact: Returns assignments only for the given scope; no parent or child scopes are included.\n\nNote\n\nTo use the AuthorizationScopeFilter parameter, be sure to use the 2021-06-01-preview or later API version in your requests.\n\nExample: Get all policy assignments at the myMG management group and Tenant Root (parent) scopes.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-06-01-preview\n\n\nRequest Body Sample\n\n{\n  \"options\": {\n    \"authorizationScopeFilter\": \"AtScopeAndAbove\"\n  },\n  \"query\": \"PolicyResources | where type =~ 'Microsoft.Authorization/PolicyAssignments'\",\n  \"managementGroups\": [\"myMG\"]\n}\n\n\nExample: Get all policy assignments at the mySubscriptionId subscription, management group, and Tenant Root scopes.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-06-01-preview\n\n\nRequest Body Sample\n\n{\n  \"options\": {\n    \"authorizationScopeFilter\": \"AtScopeAndAbove\"\n  },\n  \"query\": \"PolicyResources | where type =~ 'Microsoft.Authorization/PolicyAssignments'\",\n  \"subscriptions\": [\"mySubscriptionId\"]\n}\n\n\nEscape characters\n\nSome property names, such as those that include a . or $, must be wrapped or escaped in the query or the property name is interpreted incorrectly and doesn't provide the expected results.\n\nDot (.): Wrap the property name ['propertyname.withaperiod'] using brackets.\n\nExample query that wraps the property odata.type:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.['odata.type']\n\n\nDollar sign ($): Escape the character in the property name. The escape character used depends on the shell that runs Resource Graph.\n\nBash: Use a backslash (\\) as the escape character.\n\nExample query that escapes the property $type in Bash:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.\\$type\n\n\ncmd: Don't escape the dollar sign ($) character.\n\nPowerShell: Use a backtick (`) as the escape character.\n\nExample query that escapes the property $type in PowerShell:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.`$type\n\n\nNext steps\n\nAzure Resource Graph query language Starter queries and Advanced queries.\nLearn more about how to explore Azure resources.",
    "link": "https://learn.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language",
    "snippet": "The query language for the Azure Resource Graph supports many operators and functions. Each work and operate based on Kusto Query Language (KQL).",
    "title": "Understanding the Azure Resource Graph query language - Microsoft"
  }
]
Output (4405 bytes)
[
  {
    "text": "# Headline\n\n**Short summary:** X (formerly Twitter) has moved its API to a credit-based, pay‑per‑use model (v2) with per‑endpoint pricing, real‑time usage tracking, deduplication (24‑hour window), and a separate enterprise channel for high volume needs.\n\n\n1) Most relevant sources\n\n1. **Pricing - X - X Developer Platform**\n   - https://docs.x.com/x-api/getting-started/pricing\n   - Synthesis: Official docs describe a credit‑based, pay‑per‑usage billing model (purchase credits in the Developer Console, different endpoints cost different amounts), real‑time tracking, 24‑hour deduplication of resources, auto‑recharge and spending limits, plus free xAI credits (tiered rebate up to 20%); pay‑per‑use plans have a 2M Post reads cap and enterprises must contact sales for higher volume.\n\n2. **About the X API**\n   - https://docs.x.com/x-api/getting-started/about-x-api\n   - Synthesis: X API v2 is the recommended, modern API (JSON, flexible fields/expansions, annotations, edit history) and uses the pay‑per‑usage pricing model while documenting endpoints, rate limits, monthly post quotas, and migration notes from legacy v1.1.\n\n3. **Announcing the Launch of X API Pay‑Per‑Use Pricing (X DevCommunity)**\n   - https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476\n   - Synthesis: Official community announcement (DevCommunity) confirms the launch of the pay‑per‑use model as an intentional move to give developers granular, consumption‑based options — the thread is the place for rollout updates and community Q\u0026A.\n\n4. **X Tests Pay‑Per‑Use API Model to Win Back Developers (TechBuzz)**\n   - https://www.techbuzz.ai/articles/x-tests-pay-per-use-api-model-to-win-back-developers\n   - Synthesis: Coverage notes X’s new API cost calculator and frames pay‑per‑use as a transparency effort intended to make access more predictable and attractive compared with earlier, all‑or‑nothing tier changes.\n\n5. **Twitter API Cost Calculator — GetXAPI**\n   - https://www.getxapi.com/twitter-api-cost-calculator\n   - Synthesis: Third‑party calculator comparing per‑call costs (claims official X read/write rates vs alternatives) — useful for quick cost‑estimates but relies on input assumptions, so verify against official Developer Console prices.\n\n6. **How to Get X API Key: Complete 2026 Guide to Pricing ... (Elfsight)**\n   - https://elfsight.com/blog/how-to-get-x-twitter-api-key-in-2026/\n   - Synthesis: Practical guide summarizing 2026 state: evolution from free → expensive tiers → pay‑per‑use beta, details on auth (OAuth2 recommended), rate limits, monthly quotas, and five optimization strategies to reduce consumption and cost (field selection, caching, batching, backoff, filtered streams).\n\n7. **Twitter's Pay‑Per‑Use API: Could This Finally Kill the Scraping Economy (ScrapeCreators)**\n   - https://scrapecreators.com/blog/twitter-s-pay-per-use-api-could-this-finally-kill-the-scraping-economy\n   - Synthesis: Industry analysis arguing that an affordable pay‑per‑use API could reduce incentive to scrape, but the model must be transparent, granular, and competitively priced to shift long‑term behavior.\n\n\nFinal takeaway \u0026 suggested next steps\n\nTakeaway: X is transitioning to a credit‑based, pay‑per‑use API (v2) that emphasizes per‑endpoint pricing, usage visibility, and deduplication to reduce duplicate charges — enterprise options remain for very high volume. Actual costs depend heavily on which endpoints you use and your rate/volume patterns, so estimates from third‑party tools can help but should be validated against the Developer Console.\n\nNext steps: 1) Open your Developer Console to view exact per‑endpoint rates and enable real‑time usage monitoring; 2) run a cost estimate for your expected monthly requests (use the official calculator or a controlled test with a small credit purchase/voucher); 3) implement the documented optimization strategies (field selection, caching, batching, filtered stream) to reduce consumption; and 4) follow the X DevCommunity thread and official docs for updates on beta/enterprise limits.\n\nSuggested follow‑up searches: \"X API per‑endpoint pricing rates\", \"X API deduplication rules details\", and \"X Developer Console cost calculator / vouchers.\""
  }
]
s5 slack success 2026-03-01 22:56:25 → 2026-03-01 22:56:25
Input (4405 bytes)
[
  {
    "text": "# Headline\n\n**Short summary:** X (formerly Twitter) has moved its API to a credit-based, pay‑per‑use model (v2) with per‑endpoint pricing, real‑time usage tracking, deduplication (24‑hour window), and a separate enterprise channel for high volume needs.\n\n\n1) Most relevant sources\n\n1. **Pricing - X - X Developer Platform**\n   - https://docs.x.com/x-api/getting-started/pricing\n   - Synthesis: Official docs describe a credit‑based, pay‑per‑usage billing model (purchase credits in the Developer Console, different endpoints cost different amounts), real‑time tracking, 24‑hour deduplication of resources, auto‑recharge and spending limits, plus free xAI credits (tiered rebate up to 20%); pay‑per‑use plans have a 2M Post reads cap and enterprises must contact sales for higher volume.\n\n2. **About the X API**\n   - https://docs.x.com/x-api/getting-started/about-x-api\n   - Synthesis: X API v2 is the recommended, modern API (JSON, flexible fields/expansions, annotations, edit history) and uses the pay‑per‑usage pricing model while documenting endpoints, rate limits, monthly post quotas, and migration notes from legacy v1.1.\n\n3. **Announcing the Launch of X API Pay‑Per‑Use Pricing (X DevCommunity)**\n   - https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476\n   - Synthesis: Official community announcement (DevCommunity) confirms the launch of the pay‑per‑use model as an intentional move to give developers granular, consumption‑based options — the thread is the place for rollout updates and community Q\u0026A.\n\n4. **X Tests Pay‑Per‑Use API Model to Win Back Developers (TechBuzz)**\n   - https://www.techbuzz.ai/articles/x-tests-pay-per-use-api-model-to-win-back-developers\n   - Synthesis: Coverage notes X’s new API cost calculator and frames pay‑per‑use as a transparency effort intended to make access more predictable and attractive compared with earlier, all‑or‑nothing tier changes.\n\n5. **Twitter API Cost Calculator — GetXAPI**\n   - https://www.getxapi.com/twitter-api-cost-calculator\n   - Synthesis: Third‑party calculator comparing per‑call costs (claims official X read/write rates vs alternatives) — useful for quick cost‑estimates but relies on input assumptions, so verify against official Developer Console prices.\n\n6. **How to Get X API Key: Complete 2026 Guide to Pricing ... (Elfsight)**\n   - https://elfsight.com/blog/how-to-get-x-twitter-api-key-in-2026/\n   - Synthesis: Practical guide summarizing 2026 state: evolution from free → expensive tiers → pay‑per‑use beta, details on auth (OAuth2 recommended), rate limits, monthly quotas, and five optimization strategies to reduce consumption and cost (field selection, caching, batching, backoff, filtered streams).\n\n7. **Twitter's Pay‑Per‑Use API: Could This Finally Kill the Scraping Economy (ScrapeCreators)**\n   - https://scrapecreators.com/blog/twitter-s-pay-per-use-api-could-this-finally-kill-the-scraping-economy\n   - Synthesis: Industry analysis arguing that an affordable pay‑per‑use API could reduce incentive to scrape, but the model must be transparent, granular, and competitively priced to shift long‑term behavior.\n\n\nFinal takeaway \u0026 suggested next steps\n\nTakeaway: X is transitioning to a credit‑based, pay‑per‑use API (v2) that emphasizes per‑endpoint pricing, usage visibility, and deduplication to reduce duplicate charges — enterprise options remain for very high volume. Actual costs depend heavily on which endpoints you use and your rate/volume patterns, so estimates from third‑party tools can help but should be validated against the Developer Console.\n\nNext steps: 1) Open your Developer Console to view exact per‑endpoint rates and enable real‑time usage monitoring; 2) run a cost estimate for your expected monthly requests (use the official calculator or a controlled test with a small credit purchase/voucher); 3) implement the documented optimization strategies (field selection, caching, batching, filtered stream) to reduce consumption; and 4) follow the X DevCommunity thread and official docs for updates on beta/enterprise limits.\n\nSuggested follow‑up searches: \"X API per‑endpoint pricing rates\", \"X API deduplication rules details\", and \"X Developer Console cost calculator / vouchers.\""
  }
]
Output (4399 bytes)
[
  {
    "ok": true,
    "text": "*Headline*\n\n*Short summary:* X (formerly Twitter) has moved its API to a credit-based, pay‑per‑use model (v2) with per‑endpoint pricing, real‑time usage tracking, deduplication (24‑hour window), and a separate enterprise channel for high volume needs.\n\n\n1) Most relevant sources\n\n1. *Pricing - X - X Developer Platform*\n   - https://docs.x.com/x-api/getting-started/pricing\n   - Synthesis: Official docs describe a credit‑based, pay‑per‑usage billing model (purchase credits in the Developer Console, different endpoints cost different amounts), real‑time tracking, 24‑hour deduplication of resources, auto‑recharge and spending limits, plus free xAI credits (tiered rebate up to 20%); pay‑per‑use plans have a 2M Post reads cap and enterprises must contact sales for higher volume.\n\n2. *About the X API*\n   - https://docs.x.com/x-api/getting-started/about-x-api\n   - Synthesis: X API v2 is the recommended, modern API (JSON, flexible fields/expansions, annotations, edit history) and uses the pay‑per‑usage pricing model while documenting endpoints, rate limits, monthly post quotas, and migration notes from legacy v1.1.\n\n3. *Announcing the Launch of X API Pay‑Per‑Use Pricing (X DevCommunity)*\n   - https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476\n   - Synthesis: Official community announcement (DevCommunity) confirms the launch of the pay‑per‑use model as an intentional move to give developers granular, consumption‑based options — the thread is the place for rollout updates and community Q\u0026A.\n\n4. *X Tests Pay‑Per‑Use API Model to Win Back Developers (TechBuzz)*\n   - https://www.techbuzz.ai/articles/x-tests-pay-per-use-api-model-to-win-back-developers\n   - Synthesis: Coverage notes X’s new API cost calculator and frames pay‑per‑use as a transparency effort intended to make access more predictable and attractive compared with earlier, all‑or‑nothing tier changes.\n\n5. *Twitter API Cost Calculator — GetXAPI*\n   - https://www.getxapi.com/twitter-api-cost-calculator\n   - Synthesis: Third‑party calculator comparing per‑call costs (claims official X read/write rates vs alternatives) — useful for quick cost‑estimates but relies on input assumptions, so verify against official Developer Console prices.\n\n6. *How to Get X API Key: Complete 2026 Guide to Pricing ... (Elfsight)*\n   - https://elfsight.com/blog/how-to-get-x-twitter-api-key-in-2026/\n   - Synthesis: Practical guide summarizing 2026 state: evolution from free → expensive tiers → pay‑per‑use beta, details on auth (OAuth2 recommended), rate limits, monthly quotas, and five optimization strategies to reduce consumption and cost (field selection, caching, batching, backoff, filtered streams).\n\n7. *Twitter's Pay‑Per‑Use API: Could This Finally Kill the Scraping Economy (ScrapeCreators)*\n   - https://scrapecreators.com/blog/twitter-s-pay-per-use-api-could-this-finally-kill-the-scraping-economy\n   - Synthesis: Industry analysis arguing that an affordable pay‑per‑use API could reduce incentive to scrape, but the model must be transparent, granular, and competitively priced to shift long‑term behavior.\n\n\nFinal takeaway \u0026 suggested next steps\n\nTakeaway: X is transitioning to a credit‑based, pay‑per‑use API (v2) that emphasizes per‑endpoint pricing, usage visibility, and deduplication to reduce duplicate charges — enterprise options remain for very high volume. Actual costs depend heavily on which endpoints you use and your rate/volume patterns, so estimates from third‑party tools can help but should be validated against the Developer Console.\n\nNext steps: 1) Open your Developer Console to view exact per‑endpoint rates and enable real‑time usage monitoring; 2) run a cost estimate for your expected monthly requests (use the official calculator or a controlled test with a small credit purchase/voucher); 3) implement the documented optimization strategies (field selection, caching, batching, filtered stream) to reduce consumption; and 4) follow the X DevCommunity thread and official docs for updates on beta/enterprise limits.\n\nSuggested follow‑up searches: \"X API per‑endpoint pricing rates\", \"X API deduplication rules details\", and \"X Developer Console cost calculator / vouchers.\""
  }
]