← research-and-report

Run Detail

50b411f8-d5c2-4c4a-87ca-bd1e3b7ea0c2
success

Started

2026-03-01 22:53:19

Finished

2026-03-01 22:54:07

Steps

s1 web_search success 2026-03-01 22:53:19 → 2026-03-01 22:53:20
Input (38 bytes)
[
  {
    "query": "twitter api pricing 2026"
  }
]
Output (2464 bytes)
[
  {
    "link": "https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476",
    "snippet": "Hello X Developers, We're thrilled to officially announce the launch of our new X API Pay-Per-Use pricing model ... February 13, 2026. Announcing ...",
    "title": "Announcing the Launch of X API Pay-Per-Use Pricing"
  },
  {
    "link": "https://devcommunity.x.com/t/want-to-understand-the-pricing/256677",
    "snippet": "So the cost should be: At max, $6x0.01 + $4x0.01x2 = $0.14 (two because they each API request is capped at 1000 entries) right? But why is every ...",
    "title": "Want to understand the pricing - X API v2 - X Developer Community"
  },
  {
    "link": "https://elfsight.com/blog/how-to-get-x-twitter-api-key-in-2026/",
    "snippet": "For production applications, the Basic tier ($200/month) is the practical minimum. What's the difference between OAuth 2.0 and Bearer tokens?",
    "title": "How to Get X API Key: Complete 2026 Guide to Pricing ... - Elfsight"
  },
  {
    "link": "https://www.wearefounders.uk/the-x-api-price-hike-a-blow-to-indie-hackers/",
    "snippet": "Current X API Pricing Tiers (2026) ; Free, $0, $0 ; Basic, $200, $2,100 (save 12.5%) ; Pro, $5,000, $54,000 (save 10%) ; Enterprise, $42,000+/month ...",
    "title": "X API Pricing in 2026: Every Tier Explained (And the New Pay-As ..."
  },
  {
    "link": "https://devcommunity.x.com/",
    "snippet": "Hello X Developers, We're thrilled to officially announce the launch of our new X API Pay-Per-Use pricing model! This update is designed to empower the heart of ...",
    "title": "X Developers - Twitter"
  },
  {
    "link": "https://netrows.com/blog/top-twitter-x-data-api-providers-2026",
    "snippet": "3. RapidAPI Twitter APIs ; Best For: Quick prototyping and testing ; Starting Price: Varies by provider ($0-$500/month) ; Free Tier: Limited ...",
    "title": "Top Twitter/X Data API Providers Compared (2026) - Netrows"
  },
  {
    "link": "https://devcommunity.x.com/t/announcing-the-x-api-pay-per-use-pricing-pilot/250253",
    "snippet": "Pricing Details ; Post (Read): $0.005 per Post fetched. ; User (Read): $0.01 per User fetched. ; DM Event (Read): $0.01 per DM Event fetched.",
    "title": "Announcing the X API Pay-Per-Use Pricing Pilot"
  },
  {
    "link": "https://cbconnect-api-dev.resultsathand.com/tech-signal/twitter-api-cost-is-access-free-or-paid-1764797574",
    "snippet": "Let's break it down in a way that's easy to understand. The short answer is that it depends on your usage and the level of access you need.",
    "title": "Twitter API Cost: Is Access Free Or Paid? - Resultsathand"
  }
]
s2 web_search success 2026-03-01 22:53:20 → 2026-03-01 22:53:21
Input (2464 bytes)
[
  {
    "link": "https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476",
    "snippet": "Hello X Developers, We're thrilled to officially announce the launch of our new X API Pay-Per-Use pricing model ... February 13, 2026. Announcing ...",
    "title": "Announcing the Launch of X API Pay-Per-Use Pricing"
  },
  {
    "link": "https://devcommunity.x.com/t/want-to-understand-the-pricing/256677",
    "snippet": "So the cost should be: At max, $6x0.01 + $4x0.01x2 = $0.14 (two because they each API request is capped at 1000 entries) right? But why is every ...",
    "title": "Want to understand the pricing - X API v2 - X Developer Community"
  },
  {
    "link": "https://elfsight.com/blog/how-to-get-x-twitter-api-key-in-2026/",
    "snippet": "For production applications, the Basic tier ($200/month) is the practical minimum. What's the difference between OAuth 2.0 and Bearer tokens?",
    "title": "How to Get X API Key: Complete 2026 Guide to Pricing ... - Elfsight"
  },
  {
    "link": "https://www.wearefounders.uk/the-x-api-price-hike-a-blow-to-indie-hackers/",
    "snippet": "Current X API Pricing Tiers (2026) ; Free, $0, $0 ; Basic, $200, $2,100 (save 12.5%) ; Pro, $5,000, $54,000 (save 10%) ; Enterprise, $42,000+/month ...",
    "title": "X API Pricing in 2026: Every Tier Explained (And the New Pay-As ..."
  },
  {
    "link": "https://devcommunity.x.com/",
    "snippet": "Hello X Developers, We're thrilled to officially announce the launch of our new X API Pay-Per-Use pricing model! This update is designed to empower the heart of ...",
    "title": "X Developers - Twitter"
  },
  {
    "link": "https://netrows.com/blog/top-twitter-x-data-api-providers-2026",
    "snippet": "3. RapidAPI Twitter APIs ; Best For: Quick prototyping and testing ; Starting Price: Varies by provider ($0-$500/month) ; Free Tier: Limited ...",
    "title": "Top Twitter/X Data API Providers Compared (2026) - Netrows"
  },
  {
    "link": "https://devcommunity.x.com/t/announcing-the-x-api-pay-per-use-pricing-pilot/250253",
    "snippet": "Pricing Details ; Post (Read): $0.005 per Post fetched. ; User (Read): $0.01 per User fetched. ; DM Event (Read): $0.01 per DM Event fetched.",
    "title": "Announcing the X API Pay-Per-Use Pricing Pilot"
  },
  {
    "link": "https://cbconnect-api-dev.resultsathand.com/tech-signal/twitter-api-cost-is-access-free-or-paid-1764797574",
    "snippet": "Let's break it down in a way that's easy to understand. The short answer is that it depends on your usage and the level of access you need.",
    "title": "Twitter API Cost: Is Access Free Or Paid? - Resultsathand"
  }
]
Output (1588 bytes)
[
  {
    "link": "https://haystack.deepset.ai/blog/query-decomposition",
    "snippet": "This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach.",
    "title": "Advanced RAG: Query Decomposition \u0026 Reasoning - Haystack"
  },
  {
    "link": "https://www.jetbrains.com/help/youtrack/cloud/search-and-command-attributes.html",
    "snippet": "This page provides a list of attributes and keywords that are used in YouTrack query language. You'll also find a complete list of operators, symbols, and ...",
    "title": "Search Query Reference | YouTrack Cloud Documentation - JetBrains"
  },
  {
    "link": "https://dev.to/j12y/query-github-repo-topics-using-graphql-35ha",
    "snippet": "Creating a customized user profile page for GitHub to showcase work projects and make navigation to relevant topics easier.",
    "title": "Query GitHub Repo Topics Using GraphQL - DEV Community"
  },
  {
    "link": "https://www.sprinklr.com/help/articles/faqs-and-advanced-usecases/create-an-advanced-topic-query/646331628ea3c9635cf36711",
    "snippet": "Advanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more. By ...",
    "title": "‎Create an Advanced Topic Query | Sprinklr Help Center"
  },
  {
    "link": "https://learn.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language",
    "snippet": "The query language for the Azure Resource Graph supports many operators and functions. Each work and operate based on Kusto Query Language (KQL).",
    "title": "Understanding the Azure Resource Graph query language - Microsoft"
  }
]
s3 fetch_content success 2026-03-01 22:53:21 → 2026-03-01 22:53:28
Input (4051 bytes)
[
  {
    "link": "https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476",
    "snippet": "Hello X Developers, We're thrilled to officially announce the launch of our new X API Pay-Per-Use pricing model ... February 13, 2026. Announcing ...",
    "title": "Announcing the Launch of X API Pay-Per-Use Pricing"
  },
  {
    "link": "https://devcommunity.x.com/t/want-to-understand-the-pricing/256677",
    "snippet": "So the cost should be: At max, $6x0.01 + $4x0.01x2 = $0.14 (two because they each API request is capped at 1000 entries) right? But why is every ...",
    "title": "Want to understand the pricing - X API v2 - X Developer Community"
  },
  {
    "link": "https://elfsight.com/blog/how-to-get-x-twitter-api-key-in-2026/",
    "snippet": "For production applications, the Basic tier ($200/month) is the practical minimum. What's the difference between OAuth 2.0 and Bearer tokens?",
    "title": "How to Get X API Key: Complete 2026 Guide to Pricing ... - Elfsight"
  },
  {
    "link": "https://www.wearefounders.uk/the-x-api-price-hike-a-blow-to-indie-hackers/",
    "snippet": "Current X API Pricing Tiers (2026) ; Free, $0, $0 ; Basic, $200, $2,100 (save 12.5%) ; Pro, $5,000, $54,000 (save 10%) ; Enterprise, $42,000+/month ...",
    "title": "X API Pricing in 2026: Every Tier Explained (And the New Pay-As ..."
  },
  {
    "link": "https://devcommunity.x.com/",
    "snippet": "Hello X Developers, We're thrilled to officially announce the launch of our new X API Pay-Per-Use pricing model! This update is designed to empower the heart of ...",
    "title": "X Developers - Twitter"
  },
  {
    "link": "https://netrows.com/blog/top-twitter-x-data-api-providers-2026",
    "snippet": "3. RapidAPI Twitter APIs ; Best For: Quick prototyping and testing ; Starting Price: Varies by provider ($0-$500/month) ; Free Tier: Limited ...",
    "title": "Top Twitter/X Data API Providers Compared (2026) - Netrows"
  },
  {
    "link": "https://devcommunity.x.com/t/announcing-the-x-api-pay-per-use-pricing-pilot/250253",
    "snippet": "Pricing Details ; Post (Read): $0.005 per Post fetched. ; User (Read): $0.01 per User fetched. ; DM Event (Read): $0.01 per DM Event fetched.",
    "title": "Announcing the X API Pay-Per-Use Pricing Pilot"
  },
  {
    "link": "https://cbconnect-api-dev.resultsathand.com/tech-signal/twitter-api-cost-is-access-free-or-paid-1764797574",
    "snippet": "Let's break it down in a way that's easy to understand. The short answer is that it depends on your usage and the level of access you need.",
    "title": "Twitter API Cost: Is Access Free Or Paid? - Resultsathand"
  },
  {
    "link": "https://haystack.deepset.ai/blog/query-decomposition",
    "snippet": "This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach.",
    "title": "Advanced RAG: Query Decomposition \u0026 Reasoning - Haystack"
  },
  {
    "link": "https://www.jetbrains.com/help/youtrack/cloud/search-and-command-attributes.html",
    "snippet": "This page provides a list of attributes and keywords that are used in YouTrack query language. You'll also find a complete list of operators, symbols, and ...",
    "title": "Search Query Reference | YouTrack Cloud Documentation - JetBrains"
  },
  {
    "link": "https://dev.to/j12y/query-github-repo-topics-using-graphql-35ha",
    "snippet": "Creating a customized user profile page for GitHub to showcase work projects and make navigation to relevant topics easier.",
    "title": "Query GitHub Repo Topics Using GraphQL - DEV Community"
  },
  {
    "link": "https://www.sprinklr.com/help/articles/faqs-and-advanced-usecases/create-an-advanced-topic-query/646331628ea3c9635cf36711",
    "snippet": "Advanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more. By ...",
    "title": "‎Create an Advanced Topic Query | Sprinklr Help Center"
  },
  {
    "link": "https://learn.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language",
    "snippet": "The query language for the Azure Resource Graph supports many operators and functions. Each work and operate based on Kusto Query Language (KQL).",
    "title": "Understanding the Azure Resource Graph query language - Microsoft"
  }
]
Output (146258 bytes)
[
  {
    "content_readable": "Crawler is not allowed!",
    "link": "https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476",
    "snippet": "Hello X Developers, We're thrilled to officially announce the launch of our new X API Pay-Per-Use pricing model ... February 13, 2026. Announcing ...",
    "title": "Announcing the Launch of X API Pay-Per-Use Pricing"
  },
  {
    "content_readable": "Crawler is not allowed!",
    "link": "https://devcommunity.x.com/t/want-to-understand-the-pricing/256677",
    "snippet": "So the cost should be: At max, $6x0.01 + $4x0.01x2 = $0.14 (two because they each API request is capped at 1000 entries) right? But why is every ...",
    "title": "Want to understand the pricing - X API v2 - X Developer Community"
  },
  {
    "content_readable": "The X API pricing has dramatically changed since 2023 – free access is effectively gone. This complete guide covers authentication, rate limits, optimization strategies, and real-world use cases for building scalable X integrations with confidence.\n\n3 weeks ago\n\nThe X API (formerly Twitter API) has undergone dramatic changes since Elon Musk’s acquisition in 2023. What was once a free, developer-friendly platform is now a premium service with strict pricing tiers and carefully controlled access levels. For developers building bots, integrating real-time data, or creating social media management tools, understanding the current X API landscape is critical.\n\nThis comprehensive guide walks you through everything you need to know about obtaining X API credentials in 2026, understanding actual costs, and optimizing your implementation for efficiency.\n\nEssential concepts covered:\n\nHow X API pricing evolved from free to paid and the emerging pay-per-use model\nCurrent tiers breakdown and which tier fits your use case\nStep-by-step process to get your API credentials from the Developer Portal\nModern authentication methods and permission scopes\nFive proven optimization strategies to reduce costs and improve performance\n\nLet’s start by understanding where the X API fits into your development workflow and what’s currently available.\n\nThe X API Evolution: What Changed\n\nThe Twitter API has evolved dramatically over the years. Here’s the timeline of major changes:\n\nDate Event Impact on Developers\nOctober 2022 Elon Musk acquires Twitter Speculation about API changes begins\nFebruary 2023 Free API access eliminated Third-party clients (Tweetbot, Echofon) shut down; pricing becomes mandatory\nMarch 2023 Paid tiers introduced ($100, $2,500, $42,000) Entry price jumps 100x; developer ecosystem fragments\nJune 2024 Basic tier pricing doubles to $200/month Increased barrier to entry for indie developers\nOctober 2024 Official rebrand: Twitter → X All documentation and branding updated; confusing for legacy users\nNovember 2025 Pay-per-use pricing beta launches New consumption-based model with $500 developer vouchers for testing\n\nFree access became $200–$5,000/month in four years. Before planning an implementation, understand what the API actually provides and which tier matches your needs.\n\nWhat Can You Build With the X API?\n\nThe X API enables programmatic access to X’s infrastructure—from retrieving data to publishing content to automating responses. Here are the most common applications:\n\nBrand Monitoring \u0026 Social Intelligence\n\nTrack mentions, competitor activity, and trending conversations in real-time. Filtered streams deliver instant alerts when specific keywords or accounts generate activity, enabling teams to respond quickly to brand-relevant events.\n\nContent Scheduling\n\nAutomate posting schedules, manage multiple accounts from a single dashboard, and coordinate content workflows. Agencies and creators use these tools to handle dozens of X accounts without manual login-and-post cycles.\n\nWebsite Content Integration\n\nEmbed live X feeds, individual tweets, and trending topics directly into websites. Publishers keep content synchronized with live X activity without requiring manual updates or outdated embeds.\n\nData Analysis and Research\n\nAccess structured data for large-scale studies, trend analysis, and market research. The API provides historical search, engagement metrics, and user data at volumes that would be impossible to collect manually.\n\nAI \u0026 Sentiment Analysis\n\nFeed real-time X data into machine learning models, language models, and sentiment analysis systems. Applications range from audience monitoring to discourse analysis to predictive analytics.\n\nX API Pricing: The 2026 Tier System\n\nAs of today, X is testing a revolutionary pay-per-use pricing model, but the traditional tier system remains the active standard. Here’s what you need to know about both approaches.\n\n💲 Current Standard Pricing\n\nThe tiered pricing structure consists of three main tiers, each designed for different scales of usage:\n\nTier\tMonthly Cost\tAnnual Savings\tBest For\tKey Capabilities\nFree\t$0\t—\tDevelopment and testing only\t500 posts/month, read-heavy, 1 req per 24hrs on most endpoints, limited endpoint access\nBasic\t$200\t$2,100/year (12.5% savings)\tSmall projects, content monitoring, single app usage\t15,000 read requests/month, 50,000 write requests/month, standard endpoint access\nPro\t$5,000\t$54,000/year (10% savings)\tGrowing applications, full feature set, mission-critical systems\t1,000,000 read requests/month, 300,000 write requests/month, full endpoint access, priority support\nEnterprise\t$42,000+\tCustom pricing\tLarge-scale systems, dedicated infrastructure\tCustom rate limits, SLAs, dedicated support, advanced features, volumetric discounts\n\nWhile Basic is 25x cheaper ($200 vs $5,000), Pro gives you 100x more read capacity and unlocks critical features like full-archive search and real-time filtering. Most companies scale directly from Free → Basic → Pro.\n\n💢 What Changed: The Death of Free Access\n\nThe shift from free to paid access served two purposes: generating revenue from the platform’s data value, and reducing abuse. Free API access enabled spam bots, data scrapers, and malicious automation at scale.\n\nAvailable with Free Tier\n\n500 posts per calendar month (about 16-17 per day)\nRate-limited to 1 request per 24 hours on most endpoints\nNo posting, liking, or engaging – read-only access to public data only\nCannot write posts, create resources, or perform account actions\nNo access to trends, direct messaging, or advanced features\n\nReal-world impact: The Free tier is genuinely only for proof-of-concept work and local development testing. For any production application, you must budget for the Basic tier at minimum ($200/month).\n\n🔮 The New Pay-Per-Use Model (Beta)\n\nIn November 2025, X launched a closed beta for a revolutionary pricing approach: pay only for what you use. Instead of fixed monthly fees, developers in the beta pay individual prices for different API operations – similar to AWS or Google Cloud’s consumption-based billing.\n\nHow Pay-Per-Use Works\n\nThe beta pricing model assigns specific costs to each operation type. For example:\n\nReading a post costs a specific price (varies by operation)\nSearching posts costs more (higher computational load)\nCreating a post has its own rate\nAccessing trends uses a different pricing tier\nDirect messaging has separate pricing\n\nImportant Note: The pay-per-use model is in closed beta as of December 2025. Plan your implementation based on current tier pricing, but monitor the official X Developer Twitter (@XDevelopers) for announcements about broader rollout.\n\nAll developers in the closed beta receive a $500 voucher to experiment before committing to production usage.\n\nPotential Benefits Over Fixed Tiers\n\nNo payment for unused capacity (unlike fixed tier pricing)\nAbility to scale up or down without tier changes\nGranular control over spending per feature\nMore transparent cost attribution\n\nX provides an interactive API cost calculator where you can input your expected usage patterns and see exactly what you’d pay.\n\nX Authentication: How to Prove Your Identity\n\nBefore making any API request, you need to authenticate – prove to X that you’re authorized to access specific data. The X API v2 supports multiple authentication methods, each suited for different scenarios.\n\n🔐 OAuth 2.0 Authorization Code (Recommended for New Development)\n\nOAuth 2.0 is the modern standard for authentication and is recommended for all new development. It’s more secure than legacy approaches and handles both public and private user data.\n\nWhen to Use OAuth 2.0\n\nBuilding new applications from scratch\nWeb applications and mobile apps requiring user login\nAccessing private user data (private lists, draft posts)\nPerforming actions on behalf of users (posting, liking, following)\n\nHow It Works\n\nUser clicks “Sign in with X” in your application\nYour app redirects them to X’s authorization page\nUser grants permissions (you define the scopes requested)\nX returns an authorization code\nYour app exchanges the code for an access token\nYou use this token for API requests on behalf of the user\n\nRequired credentials: Client ID, Client Secret, and redirect URI (configured in your developer app settings).\n\n🔑 OAuth 1.0a User Context (Legacy, Still Supported)\n\nThis older method is still supported but not recommended for new development. OAuth 1.0a authenticates on behalf of a specific user and is primarily useful for legacy applications.\n\nPosted tweets or direct messages on a user’s behalf\nRetrieving a specific user’s private timeline\nManaging user-specific resources\n\nWhy it’s less preferred: More complex to implement, less secure than OAuth 2.0, and X is gradually moving developers toward OAuth 2.0.\n\n👥 Bearer Token (App-Only, Best for Public Data)\n\nBearer token authentication is the simplest approach for accessing public data without user context. Use this when you’re building tools that only need public information.\n\nWhen to Use\n\nSearching for public posts\nRetrieving public user profiles\nAccessing publicly available trends\nBuilding analytics tools for public content\n\nHow it works: Provide your app’s credentials (API Key and Secret), receive a Bearer Token, include the token in API request headers. No user involvement required.\n\nSecurity Best Practice: Store all credentials (API Keys, Secrets, Bearer Tokens) in environment variables or secure configuration files – never hardcode them into your application code. If credentials are exposed, regenerate them immediately in the developer portal.\n\nX API v2: Endpoints and Resource Types\n\nThe X API comes in two versions: v1.1 (legacy, no longer updated) and v2 (current standard). All new projects should use v2, which provides access to endpoints organized by resource type – Posts, Users, Trends, Engagement, and more. Each resource supports specific operations (read, create, update, delete) depending on your tier and permissions.\n\nPosts (Tweets) – The Core Resource\n\nWhat you can do: Retrieve posts, search for posts matching criteria, create new posts, delete posts, access timelines\n\nCommon endpoints:\n\nGET /2/tweets — Lookup specific posts by ID\nGET /2/tweets/search/recent — Search recent posts (last 7 days)\nPOST /2/tweets — Create a new post\nGET /2/users/:id/tweets — Get posts from a specific user\n\nPosts are the foundation of the X API. Almost every use case involves retrieving, searching, or creating posts in some way.\n\nUsers – Profile Information\n\nWhat you can do: Access user profiles, get follower information, search for users\n\nCommon endpoints:\n\nGET /2/users/by/username/:username — Get user by handle\nGET /2/users/:id — Get user by ID\nGET /2/users/:id/followers — Get user’s followers\n\nUser endpoints let you build profiles, track followers, and verify account information without manually visiting X.\n\nEngagement – Likes, Retweets, Replies\n\nWhat you can do: See engagement metrics, track who liked or retweeted posts, manage user engagement\n\nCommon endpoints:\n\nGET /2/tweets/:id/liked_by — See who liked a post\nPOST /2/users/:id/likes — Like a post\nGET /2/tweets/:id/quote_tweets — Get quote tweets (retweets with added commentary)\n\nEngagement endpoints power analytics dashboards and community management tools by tracking interactions and responses to content.\n\nLists – User Collections\n\nWhat you can do: Create and manage curated lists of users, access posts from list members\n\nCommon endpoints:\n\nGET /2/lists — List your lists\nPOST /2/lists/:id/members — Add member to list\nGET /2/lists/:id/tweets — Get posts from list members\n\nLists are useful for organizing accounts and creating targeted feeds without following everyone publicly.\n\nTrends – What’s Happening Now\n\nWhat you can do: Access real-time trending topics and hashtags\n\nCommon endpoints:\n\nGET /2/trends — Get trending topics\nGET /2/users/personalized_trends — Get personalized trending topics for a user\n\nTrends data powers discovery features and helps applications surface relevant conversations happening right now on X.\n\nFiltered Stream – Real-Time Data\n\nWhat you can do: Subscribe to a real-time stream of posts matching your rules, receive notifications as posts are created\n\nCommon endpoints:\n\nGET /2/tweets/search/stream — Connect to filtered stream\nPOST /2/tweets/search/stream/rules — Create or modify stream rules\n\nFiltered stream is powerful for applications that need real-time updates (monitoring brand mentions, tracking specific keywords, etc.) without constantly polling the search endpoint.\n\nDirect Messages – Private Communication\n\nWhat you can do: Send and receive direct messages, manage conversations\n\nCommon endpoints:\n\nGET /2/dm_events — Retrieve direct messages\nPOST /2/dm_conversations/:id/messages — Send a message\n\nDirect message endpoints enable customer support automation and notification systems built on top of X.\n\nNote: Not all endpoints are available on all tiers. Free tier access is heavily restricted. The Basic tier ($200/month) provides access to most commonly used endpoints. Check the official X API documentation to verify endpoint availability for your tier before building features.\n\nRate Limits and Quota Management\n\nThe X API v2 enforces two types of limits: request rate limits (per 15-minute windows) and monthly post consumption limits (tracked across the calendar month).\n\n📨 Request Rate Limits (Per 15-Minute Windows)\n\nDifferent endpoints have different rate limits based on your tier.\n\nEndpoint Example\tFree Tier\tBasic Tier\tPro Tier\nGET /2/users/:id (lookup user)\t1 req / 24 hours\t100 requests / 24 hours\t900 requests / 15 mins\nPOST /2/tweets (create post)\tNot available\tAvailable\tAvailable\nGET /2/tweets/search/recent\tLimited\tAvailable\t450 requests / 15 mins\n\nFree tier uses per-endpoint limits measured in 24-hour windows (very restrictive). Basic and Pro tiers use 15-minute windows, which are much more generous because the window resets frequently.\n\n📊 Monthly Post Consumption Limits\n\nSeparate from request rate limits, search and stream endpoints consume from a monthly “post quota.” Once consumed, you can’t query these endpoints until the next calendar month.\n\nFree tier: 10,000 posts/month\nBasic tier: 500,000 posts/month\nPro tier: 2,000,000+ posts/month\n\nThese limits apply specifically to: recent search, filtered stream, user timelines, and mention timelines.\n\n🚨 What Happens When You Hit a Limit\n\nWhen you exceed a rate limit, X returns an HTTP 429 (Too Many Requests) error response with a Retry-After header indicating how many seconds to wait before retrying.\n\nWhen you exhaust your monthly post quota, X returns a 429 error indicating the quota limit is reached. You’re blocked from querying that endpoint until the next calendar month begins.\n\nBest Practice: Implement exponential backoff and retry logic in your application. When you receive a 429 error, wait the duration specified in Retry-After before retrying. For monthly quota exhaustion, cache your search results aggressively to avoid querying the same data repeatedly.\n\nFive Optimization Strategies: Reduce Costs and Improve Performance\n\nWith limited rate limits and monthly quotas, optimization directly impacts your application’s capability and cost. Here are proven strategies to reduce API consumption.\n\n1. Use Field Selection to Reduce Response Size\n\nBy default, API responses return many fields you might not need. The fields parameter lets you request only specific data.\n\nInstead of:\n\nGET /2/tweets?ids=TWEET_ID\n\nUse:\n\nGET /2/tweets?ids=TWEET_ID\u0026tweet.fields=created_at,public_metrics\u0026expansions=author_id\u0026user.fields=username\n\nThe second request returns only the data you need, resulting in smaller responses and faster processing.\n\n2. Implement Application-Level Caching\n\nCache API responses in your database or cache layer with appropriate TTL values:\n\nStatic content (usernames, display names): 24 hours\nSemi-dynamic content (post text, engagement counts): 6 hours\nReal-time content (trending topics): 30 minutes to 1 hour\n\nReal impact: A dashboard that previously fetched trending posts every 15 minutes can drop to every 2 hours with caching, reducing daily API calls from 96 to 12—an 87.5% reduction.\n\n3. Batch Requests Whenever Possible\n\nSome endpoints accept multiple IDs in a single request.\n\nInstead of 3 separate requests:\n\nGET /2/tweets?ids=ID1 GET /2/tweets?ids=ID2 GET /2/tweets?ids=ID3\n\nUse 1 batch request:\n\nGET /2/tweets?ids=ID1,ID2,ID3\n\nThis reduces your consumption from 3 requests to 1, saving 67% of your quota.\n\n4. Use Backoff and Retry Logic\n\nWhen hitting rate limits or temporary errors, retry with exponential backoff:\n\nWait 1 second before retry 1\nWait 2 seconds before retry 2\nWait 4 seconds before retry 3\nWait 8 seconds before retry 4\n\nThis prevents hammering the API and gives temporary issues time to resolve.\n\n5. Consider Filtered Stream Instead of Polling\n\nInstead of repeatedly asking “Are there new posts matching my criteria?” (polling), subscribe to webhooks where X pushes notifications when matching posts appear.\n\nPolling approach: Check every 5 minutes = 288 checks/day. Most checks return “no new data” (wasted quota).\n\nFiltered stream approach: Receive notification only when data changes. Zero wasted requests. Real-time updates.\n\nCombined Impact: Applying all five optimization strategies together can reduce your API consumption 70-90% compared to unoptimized code. A dashboard consuming 5,000 units daily can drop to 500-1,500 units through optimization alone, without requesting a quota increase.\n\nError Handling: Common Issues and Solutions\n\nUnderstanding common error codes helps you debug and recover gracefully.\n\nError Code\tHTTP Status\tCause\tSolution\nInvalid Request\t400\tMalformed request or missing required fields\tReview request format, ensure all required parameters present\nUnauthorized\t401\tMissing or invalid credentials\tCheck that Bearer Token or OAuth tokens are correct and not expired\nForbidden\t403\tAuthenticated but not authorized (insufficient permissions)\tRequest additional scopes in your OAuth flow, get user re-approval\nNot Found\t404\tResource doesn’t exist (invalid ID, deleted content)\tVerify resource ID is correct and still exists\nRate Limited\t429\tToo many requests within the time window\tImplement backoff, wait for rate limit window to reset (check Retry-After header)\nQuota Exceeded\t429\tMonthly post quota exhausted\tWait until next calendar month, or request quota increase\n\n🔧 Parsing Error Responses\n\nWhen an error occurs, X returns JSON with details:\n\n{ \"errors\": [ { \"message\": \"The `ids` query parameter value is invalid\", \"type\": \"https://api.x.com/2/problems/invalid-request\" } ] }\n\nBest practice: Always wrap API calls in try-catch blocks and log errors to a monitoring system. This helps you identify patterns and debug issues faster.\n\nGet Your X API Key: Step-by-Step\n\nThe process has simplified significantly compared to the old Twitter API, but there are still critical steps:\n\n🔗 Step 1: Create a Developer Account\n\nNavigate to X Developer Portal\nSign in with your X account (or create one)\nComplete developer profile setup\nAwait approval (typically 5-10 minutes)\n\nFirst-time users will see an onboarding wizard that guides you through creating your first Project and App. If you don’t see this, click “Projects \u0026 Apps” in the left sidebar.\n\n📂 Step 2: Create a Project\n\nA Project is a container for one or more Apps. Think of it as a workspace.\n\nIn the Developer Portal, click “Create Project”\nName your project (e.g., “Analytics Dashboard”)\nDescribe your use case\nSelect your access tier (start with Free for testing)\n\nBy default, you’re on the Free tier. To upgrade: Go to the “Products” section in the developer portal → Find the X API v2 card and click “View Access Levels” → Select the tier you want\n\n🔨 Step 3: Create an App\n\nWithin your project, click “Create App”\nChoose an App name (e.g., “Brand Monitor Bot”)\nAccept terms\nGenerate your API keys\n\n🔑 Step 4: Access Your Credentials\n\nNavigate to your app’s “Keys and Tokens” tab. You’ll find:\n\nAPI Key (Consumer Key): A public identifier for your app. Safe to share in source code.\nAPI Secret Key (Consumer Secret): Keep this secure! Never expose it in client-side code or version control.\nBearer Token (for app-only auth): Used for app-only authentication (read-only, no user context needed). Also keep secure.\nClient ID \u0026 Secret (for OAuth 2.0): OAuth 2.0 credentials. Only visible if you enable OAuth 2.0 in your app settings.\n\nCritical Security Warning: These credentials display only once. Copy them immediately to a secure location (password manager, encrypted file, environment variables). Never commit to version control or publish publicly. If exposed, regenerate immediately.\n\nRecommended Tools \u0026 Resources\n\nOfficial X API Documentation: The authoritative source for all endpoints, parameters, and examples.\nRate Limits Reference: Complete breakdown of all endpoint rate limits by tier.\nX Postman Collection: Pre-built API requests for testing in Postman. Eliminates manual endpoint crafting.\nX Developer Community Forum: Connect with other developers, ask questions, report issues.\nX Dev GitHub: Official sample code, SDKs, and libraries for Python, JavaScript, Java, and more.\nClient Libraries: Official and community-maintained SDKs in multiple languages. Saves time vs. raw HTTP requests.\n\nFAQ: Common Questions About the X API\n\nThe Free tier is available but extremely limited (500 posts/month, 1 request per 24 hours on most endpoints). It’s suitable only for development and proof-of-concept work. For production applications, the Basic tier ($200/month) is the practical minimum.\n\nOAuth 2.0 authenticates on behalf of a specific user and grants permission scopes. Bearer token (app-only) authenticates as your application to access public data. Use OAuth 2.0 when users need to login and grant permissions; use Bearer tokens for public data without user involvement.\n\nOAuth tokens don’t expire automatically—they remain valid until explicitly revoked or regenerated. Best practice: regenerate tokens every 90 days for security. If you suspect a token is compromised, regenerate immediately.\n\nYou receive an HTTP 429 response with a Retry-After header. Implement exponential backoff and retry after the specified duration. Your request is rejected, so no quota is consumed for failed attempts.\n\nYes. Submit a quota increase request through the Google Cloud Console. Provide your use case, user count, and realistic usage estimates. Google reviews and approves/denies based on compliance and legitimacy. Quota increases are free.\n\nFree tier: development and testing only. Basic ($200/month): most real-world projects (content monitoring, automation, small applications). Pro ($5,000/month): high-traffic applications, APIs serving many end users. Enterprise ($42k+): mission-critical systems requiring SLAs and dedicated support.\n\nNeed more help? Check the X Developer Documentation or visit the X Developer Community Forum to connect with other developers and get answers from the community.\n\nNext Steps\n\nBuilding with the X API is straightforward once you understand the pricing, rate limits, and optimization strategies. Whether you’re monitoring brand conversations, automating content, or analyzing trends, the API provides everything you need. Start with a small project, implement the five optimization strategies early, and grow from there.\n\nThe difference between a scalable application and one that struggles often comes down to implementation details. Plan thoroughly, optimize aggressively from day one, and your X integration will thrive. Ready to get started? Head to developer.x.com, create your first project, and begin building!\n\nSupport\n\nIf you have read the instructions but still have any questions, you can always contact our support specialists or read articles in the Help Center.\n\nAsk for help\n\nForum\n\nContact Elfsight peers, share your thoughts, and participate in community activities!\n\nJoin us\n\nWishlist\n\nVisit Wishlist to offer features that you need but the Form Builder doesn’t have yet.\n\nShare Your Idea\n\nHi, I’m Kristina – content manager at Elfsight. My articles cover practical insights and how-to guides on smart widgets that tackle real website challenges, helping you build a stronger online presence.",
    "link": "https://elfsight.com/blog/how-to-get-x-twitter-api-key-in-2026/",
    "snippet": "For production applications, the Basic tier ($200/month) is the practical minimum. What's the difference between OAuth 2.0 and Bearer tokens?",
    "title": "How to Get X API Key: Complete 2026 Guide to Pricing ... - Elfsight"
  },
  {
    "content_readable": "Updated February 2026 — X just launched pay-as-you-go API pricing on February 6. Here's what every tier costs, what changed, and what it means for indie builders.\n\nIf you're building anything that touches X data (a social listening tool, a bot, a startup that depends on post volume) you've probably had a rough couple of years. The X API has been through more pricing changes since Elon Musk's acquisition than most platforms see in a decade.\n\nThe latest change landed on February 6, 2026: X announced a pay-as-you-go model, moving away from fixed monthly tiers for some developers. It's the most significant structural shift since the original price hike that doubled Basic from $100 to $200.\n\nThis guide covers everything: current pricing, what pay-as-you-go actually means in practice, who it helps, and whether alternatives are now worth a serious look.\n\nCurrent X API Pricing Tiers (2026)\n\nThe fixed tier system remains available alongside the new pay-as-you-go option. Here's where things stand:\n\nTier\tMonthly Price\tAnnual Price\tRead Requests\tWrite Requests\nFree\t$0\t$0\tWrite-only (no reads)\t500 posts/month\nBasic\t$200\t$2,100 (save 12.5%)\t15,000/month\t50,000/month\nPro\t$5,000\t$54,000 (save 10%)\t1,000,000/month\tHigher limits\nEnterprise\t$42,000+/month\tCustom\tCustom\tCustom + $1/month per connected account\n\nWhat Each Tier Actually Gets You\n\nFree is write-only and essentially useless for anything that needs to read or analyse posts. 500 writes per month is enough for a simple bot that posts updates, and nothing more. If you were on the old generous free tier, those days are long gone.\n\nBasic at $200/month is the entry point for any real use case. You get 15,000 read requests per month and 50,000 writes.\n\nThat sounds reasonable until you start building something with meaningful volume, 15,000 reads disappears fast if you're doing any kind of monitoring or search. For context, that's roughly 500 reads per day.\n\nPro at $5,000/month is where the cliff edge is. There's no middle ground between $200 and $5,000. One of the most complained-about aspects of the current pricing structure. One million reads per month unlocks at this tier, along with full-archive search and real-time filtering. For most indie builders, this price point is simply out of reach.\n\nEnterprise at $42,000+/month is for large organisations that need complete data access, dedicated support, and custom terms. The additional $1/month per connected account fee is notable for platforms that authenticate many users.\n\nThe Big February 2026 Change: Pay-As-You-Go\n\nOn February 6, 2026, X announced a shift to consumption-based billing, similar to how AWS or Google Cloud charge for compute.\n\nHere's how it works:\n\nInstead of a fixed monthly fee, developers buy credits and spend them per API operation\nDifferent operations have different costs. Reading a post, searching posts, and writing all carry separate prices\nLegacy free tier users who were still active will move to pay-as-you-go and receive a one-time $10 voucher\nBasic and Pro fixed plans remain available for those who prefer predictable billing\nDevelopers can opt into pay-as-you-go from their existing fixed plan\n\nX also added auto top-up settings (credits purchase automatically when balance runs low) and spending caps (requests stop when a monthly limit is hit), which addresses one of the biggest complaints about the old system, the fear of runaway costs.\n\nWho this helps: Developers with inconsistent or low usage who were previously forced into a $200/month commitment even for occasional API calls. If you use the API sporadically, pay-as-you-go could be significantly cheaper.\n\nWho this doesn't help: Anyone with consistent high-volume usage who needs predictable costs. Fixed tiers remain the better option for production apps with steady read volumes.\n\nThe catch: Early analysis suggests pay-as-you-go isn't necessarily cheaper than fixed tiers at equivalent usage levels. The $200 Basic plan gives 15,000 reads per month. Plugging similar usage into the pay-as-you-go model suggests costs could run higher for developers who use the API consistently rather than sporadically.\n\nHow We Got Here: A Timeline\n\nIt's worth understanding how X arrived at this point, because the pricing trajectory matters for how much trust to place in the current structure.\n\nPre-2023 (Twitter era): The free tier offered 500,000 tweets per month. Premium plans ran from $149 to $2,499 per month. The API was a developer playground that enabled thousands of research projects, tools, and businesses.\n\nFebruary 2023: Elon Musk's X ended free API access entirely, introducing the tiered system. The move was framed as tackling the bot problem but was widely read as a revenue play, particularly given X's financial position at the time.\n\n2024: Basic doubled from $100 to $200. The free tier's post limit was cut from 1,500 to 500 per month. Enterprise fees of $1/month per connected account were introduced.\n\nNovember 2025: X launched a closed beta for pay-as-you-go pricing, giving developers in the beta a $500 voucher to experiment.\n\nFebruary 6, 2026: Pay-as-you-go pricing announced broadly, with the fixed tier system remaining alongside it.\n\nThe pattern is consistent: prices up, limits down, with periodic structural changes that keep developers guessing. As indie builder Daniel Nguyen, whose KTool app was directly affected by the original hike, put it: X carries \"a huge risk\" for makers because the platform doesn't offer the same stability or commitment to its developer community as other API providers.\n\nThe Indie Hacker Reality\n\nThe gap between $200 and $5,000 per month is where most of the damage has been done.\n\nA developer building a social listening tool for small businesses at $20 per month per customer needs 250 customers just to cover a Pro plan subscription. That's a real business. And most side projects never get there.\n\nThe community reaction when Basic doubled was telling. As one indie hacker put it at the time: \"This pricing update does not make sense in regards to getting rid of bots. They mostly want to keep their data because that's the most valuable asset they have in the age of AI.\"\n\nThat last point is key. X's data is genuinely valuable for training AI models. The pricing changes reflect that value being recognised and monetised, not just a response to the bot problem.\n\nThe real cost for the ecosystem has been the chilling effect. Tools get shut down before they launch. Researchers work around the API rather than through it. And the platform loses the developer goodwill that made Twitter's API one of the most-used in the world.\n\nShould You Consider Alternatives?\n\nThe third-party X API market has grown significantly since the original price hikes. Options include:\n\nScraping-based alternatives (various providers): Often 90-96% cheaper than the official API, but carry terms of service risk and can be unreliable as X updates its platform\nSocial data aggregators: Platforms that resell X data alongside other social networks, typically starting around $49 to $200/month with more predictable pricing\nPurpose-built tools: For specific use cases like social listening or analytics, off-the-shelf SaaS tools may be cheaper than building on the raw API\n\nBefore switching, factor in integration complexity. Stripe's developer experience warning applies here too. X's official API is well documented and switching to unofficial alternatives introduces reliability and compliance risk that could be more expensive in the long run.\n\nFor production applications that depend on X data, the official API remains the only genuinely safe option. For experimentation, research, or projects that can tolerate some instability, alternatives are worth evaluating.\n\nWhat to Do Right Now\n\nIf you're currently on a fixed Basic or Pro plan: Review whether pay-as-you-go would be cheaper for your actual usage pattern. If your API calls are inconsistent or low-volume, it might be. If you're consistently hitting your read limits, stay on the fixed plan.\n\nIf you're building something new: Factor the full API cost into your unit economics before committing. At $200/month minimum for any meaningful read access, X data needs to be central to your value proposition to justify the cost at early stage.\n\nIf you were on the legacy free tier: You'll be moved to pay-as-you-go with a $10 voucher. Set a spending cap immediately to avoid surprise bills while you evaluate your options.\n\nIf you're at $5,000/month or above: You already know this, but it's worth renegotiating directly with X's enterprise team, custom pricing exists and the $42,000+ floor for enterprise has room to move for the right use case.\n\nThe X API story isn't over. Pay-as-you-go is the latest chapter in an ongoing restructuring of how X monetises its data. Whether it signals a more developer-friendly direction or simply a new way to extract more revenue remains to be seen.\n\nFor now, the best approach is to treat X API costs as a genuine line item in your business model (not an afterthought) and build accordingly.\n\nOriginal story from January 2025 below.\n\nThe X API, a crucial tool for many startups and small businesses, is about to get a lot more expensive.\n\nIn a recent forum post, the X team announced that developers on the platform's Basic usage tier will see their monthly bill double from $100 to $200. This price hike is a significant blow to indie hackers who have long relied on the X API. Before the introduction of tiered pricing, many makers paid nothing (or next to nothing) to use the service.\n\nThe move comes as X, under Elon Musk's ownership, continues to grapple with its bot problem and search for new revenue streams. The collateral damage to legitimate startups is concerning. Unlike other platform providers, X doesn't seem to offer the same stability or investment in its developer community. The abrupt price hikes, coupled with the platform's ongoing struggles, have left many small businesses and indie projects in a precarious position.\n\nFor indie hackers and small startups that have come to rely on the X API, this price hike remains a tough pill to swallow. As the platform continues to evolve under new ownership, the future looks uncertain for the many developers who have built their businesses on X's data and functionality.",
    "link": "https://www.wearefounders.uk/the-x-api-price-hike-a-blow-to-indie-hackers/",
    "snippet": "Current X API Pricing Tiers (2026) ; Free, $0, $0 ; Basic, $200, $2,100 (save 12.5%) ; Pro, $5,000, $54,000 (save 10%) ; Enterprise, $42,000+/month ...",
    "title": "X API Pricing in 2026: Every Tier Explained (And the New Pay-As ..."
  },
  {
    "content_readable": "Crawler is not allowed!",
    "link": "https://devcommunity.x.com/",
    "snippet": "Hello X Developers, We're thrilled to officially announce the launch of our new X API Pay-Per-Use pricing model! This update is designed to empower the heart of ...",
    "title": "X Developers - Twitter"
  },
  {
    "content_readable": "Why X (Twitter) Data APIs Matter in 2026\n\nX (formerly Twitter) remains one of the most valuable sources of real-time public data. With over 500 million monthly active users, the platform generates massive amounts of data that businesses use for:\n\nSocial Listening: Monitor brand mentions, sentiment, and trends\nInfluencer Marketing: Identify and analyze influencers in your niche\nMarket Research: Track industry conversations and competitor activity\nLead Generation: Find potential customers based on their tweets and interests\nContent Strategy: Understand what content resonates with your audience\nCrisis Management: Real-time monitoring for brand reputation\n\nHowever, accessing X data programmatically has become increasingly challenging since the platform's API changes in 2023. This guide compares the best alternatives for developers who need reliable X data access.\n\nHow We Evaluated These Providers\n\nWe tested each provider based on:\n\nData Coverage: Users, tweets, followers, communities, trends\nAPI Performance: Response times and reliability\nPricing: Cost per request and value for money\nRate Limits: Requests per minute/day\nDocumentation: Quality and ease of integration\nData Freshness: Real-time vs cached data\nCompliance: Terms of service and legal considerations\n\n1. Netrows\n\nBest For: Developers needing comprehensive X + LinkedIn data\nStarting Price: $49/month\nX Endpoints: 26 endpoints\nFree Trial: 100 credits\n\nOur Top Pick for Value \u0026 Coverage\n\nX Data Coverage\n\nUsers: Profile info, about, batch lookup, tweets, followers, following, mentions, verified followers\nTweets: Tweet details, replies, quotes, retweeters, threads, articles, search\nLists: List followers and members\nCommunities: Community info, members, moderators, tweets, search\nTrends: Trending topics by location\nSpaces: Space details and participants\n\nPros\n\nMost comprehensive X API coverage (26 endpoints)\nFlexible credit pricing: 1-50 credits per call based on data volume\nCombined with 48 LinkedIn endpoints (74+ total)\nReal-time data, not cached\nFast response times (\u003c2 seconds)\nExcellent documentation with code examples\nNo annual contracts required\n99.9% uptime SLA\n\nCons\n\nNewer X API offering (launched December 2025)\nNo historical tweet archive access\n\nPricing\n\nX endpoints use tiered credit pricing based on data volume: single-item lookups (user info, trends, spaces) cost 1 credit, paginated endpoints returning 20 items cost 5 credits, batch endpoints (up to 100 items) cost 25 credits, and bulk endpoints (followers, following) cost 50 credits but return 200 profiles per request. With the $49/month Starter plan (10,000 credits), you get thousands of X API calls.\n\n2. X (Twitter) Official API\n\nBest For: Enterprise companies with large budgets\nStarting Price: $100/month (Basic), $5,000/month (Pro)\nFree Tier: Very limited (1,500 tweets/month read)\n\nPros\n\nOfficial data source\nFull compliance with X terms\nAccess to full archive (Enterprise)\nStreaming API available\n\nCons\n\nExtremely expensive ($5,000-$42,000/month for useful access)\nSevere rate limits on lower tiers\nComplex approval process\nFree tier practically unusable\nFrequent API changes and deprecations\nPoor developer experience\n\nPricing Tiers\n\nFree: 1,500 tweets/month read, 1 app\nBasic ($100/mo): 10,000 tweets/month read\nPro ($5,000/mo): 1M tweets/month read\nEnterprise ($42,000+/mo): Full access, streaming\n\n3. RapidAPI Twitter APIs\n\nBest For: Quick prototyping and testing\nStarting Price: Varies by provider ($0-$500/month)\nFree Tier: Limited requests\n\nPros\n\nMultiple providers to choose from\nEasy to test different options\nSome free tiers available\nUnified billing through RapidAPI\n\nCons\n\nInconsistent data quality across providers\nMany providers are unreliable\nLimited support\nNo SLA guarantees\nProviders frequently go offline\n\n4. Apify Twitter Scrapers\n\nBest For: One-time data collection projects\nStarting Price: $49/month (platform fee) + usage\nFree Tier: $5 free credits\n\nPros\n\nFlexible scraping options\nCan customize data extraction\nGood for bulk historical data\nMultiple Twitter actors available\n\nCons\n\nNot a real-time API\nScraping can be unreliable\nMay violate X terms of service\nRequires technical setup\nRate limited by X's anti-scraping measures\n\n5. Brandwatch\n\nBest For: Enterprise social listening\nStarting Price: Custom (typically $800+/month)\nFree Tier: Demo only\n\nPros\n\nComprehensive social listening platform\nHistorical data access\nSentiment analysis included\nMulti-platform coverage\n\nCons\n\nVery expensive\nNot developer-focused (UI-first)\nLimited API access\nAnnual contracts required\nOverkill for simple data needs\n\n6. Sprout Social\n\nBest For: Social media management teams\nStarting Price: $249/month\nFree Tier: 30-day trial\n\nPros\n\nAll-in-one social management\nGood analytics dashboard\nTeam collaboration features\nPublishing and scheduling\n\nCons\n\nNot an API provider\nLimited data export options\nExpensive for data access alone\nFocused on marketing, not developers\n\n7. Tweepy (Python Library)\n\nBest For: Python developers using official API\nStarting Price: Free (library) + X API costs\nFree Tier: Open source\n\nPros\n\nFree and open source\nWell-documented\nActive community\nEasy to use for Python developers\n\nCons\n\nStill requires X API access (expensive)\nPython only\nSubject to X API limitations\nNo additional data beyond official API\n\nPricing Comparison Table\n\nProvider\tStarting Price\tX Endpoints\tReal-time\nNetrows\t$49/mo\t26\nX Official (Pro)\t$5,000/mo\tFull\nRapidAPI\tVaries\tVaries\nApify\t$49/mo+\tScraping\nBrandwatch\t$800+/mo\tLimited\n\nWhich Provider Should You Choose?\n\nFor Developers \u0026 Startups\n\nRecommendation: Netrows\nBest value with 26 X endpoints at half-price credits. Combined with LinkedIn data, it's the most comprehensive B2B data API at $49/month. Perfect for building applications that need both professional and social data.\n\nFor Enterprise Social Listening\n\nRecommendation: X Official API (Enterprise) or Brandwatch\nIf you need full historical access, streaming, and have budget for $42,000+/year, the official API is the safest choice. Brandwatch is better if you need a complete social listening platform with analytics.\n\nFor Quick Prototyping\n\nRecommendation: Netrows or RapidAPI\nNetrows offers 100 free credits to test. RapidAPI has various free tiers but quality varies significantly.\n\nFor One-Time Data Collection\n\nRecommendation: Apify\nIf you need bulk historical data for a one-time project and don't need real-time access, Apify scrapers can work. Be aware of potential ToS issues.\n\nFrequently Asked Questions\n\nIs it legal to access X data through third-party APIs?\n\nYes, as long as the provider has legitimate access to the data. Providers like Netrows access publicly available data in compliance with applicable laws. Always check the provider's terms of service and ensure your use case is compliant.\n\nWhy is the official X API so expensive?\n\nX significantly increased API pricing in 2023 to monetize data access. The Basic tier ($100/mo) is too limited for most use cases, pushing developers to Pro ($5,000/mo) or Enterprise ($42,000+/mo) tiers.\n\nWhat X data can I access through Netrows?\n\nNetrows provides 26 X endpoints covering: user profiles, followers, following, tweets, replies, quotes, retweeters, lists, communities, trends, and spaces. All data is fetched in real-time.\n\nCan I get historical tweets?\n\nMost third-party providers (including Netrows) provide recent tweets and user timelines. For full historical archive access (tweets from years ago), you need X's Enterprise API tier.\n\nWhat's the best X API for influencer analysis?\n\nNetrows is ideal for influencer analysis with endpoints for followers, following, verified followers, engagement metrics, and user search. You can identify influencers, analyze their audience, and track their content.\n\nDo I need both X and LinkedIn data?\n\nFor B2B use cases, combining X and LinkedIn data provides the most complete picture. LinkedIn for professional background, X for real-time activity and interests. Netrows is the only provider offering both in one API.\n\nTry the Best X Data API\n\nNetrows offers 26 X endpoints plus 48 LinkedIn endpoints in one API. Get started with 100 free credits today.",
    "link": "https://netrows.com/blog/top-twitter-x-data-api-providers-2026",
    "snippet": "3. RapidAPI Twitter APIs ; Best For: Quick prototyping and testing ; Starting Price: Varies by provider ($0-$500/month) ; Free Tier: Limited ...",
    "title": "Top Twitter/X Data API Providers Compared (2026) - Netrows"
  },
  {
    "content_readable": "Crawler is not allowed!",
    "link": "https://devcommunity.x.com/t/announcing-the-x-api-pay-per-use-pricing-pilot/250253",
    "snippet": "Pricing Details ; Post (Read): $0.005 per Post fetched. ; User (Read): $0.01 per User fetched. ; DM Event (Read): $0.01 per DM Event fetched.",
    "title": "Announcing the X API Pay-Per-Use Pricing Pilot"
  },
  {
    "content_readable": "Does Twitter API Cost Money?\n\nSo, you’re diving into the world of Twitter’s API and wondering about the cost? Let’s break it down in a way that’s easy to understand. The short answer is that it depends on your usage and the level of access you need. Twitter, now known as X, has restructured its API offerings, and understanding the different tiers is crucial to avoid unexpected charges. In the past, Twitter offered more generous free access, but those days are largely gone. Nowadays, accessing the Twitter API typically involves some level of payment, especially if you’re building applications or tools that rely heavily on real-time data or large-scale data analysis. The main reason for this shift is to control the usage and ensure the stability of their platform. Think of it this way: providing free, unlimited access to their API could lead to abuse and strain their infrastructure. By implementing a paid model, Twitter aims to maintain a sustainable ecosystem for developers while also generating revenue. But don’t worry, there are still some options that might fit your budget, depending on what you’re trying to achieve.\n\nTable of Contents\n\nUnderstanding Twitter API Pricing Tiers\nFactors Influencing the Cost of Twitter API\nHow to Check Twitter API Pricing\nAlternatives to Paid Twitter API Access\nTips for Minimizing Twitter API Costs\nConclusion: Is the Twitter API Worth the Cost?\n\nUnderstanding Twitter API Pricing Tiers\n\nTo really grasp whether the Twitter API costs money for you, you’ve got to get familiar with the different pricing tiers they offer. Basically, Twitter provides various levels of access, each tailored to different needs and use cases, and each comes with its own price tag. The free tier, which was available in the past, has been significantly limited. It primarily caters to very basic use cases, such as academic research or personal projects with minimal data requirements. If you’re planning anything beyond simple, infrequent requests, you’ll likely need to consider a paid plan. The basic tier is designed for hobbyists and smaller projects. This tier usually includes access to essential endpoints, allowing you to read and write tweets, follow users, and perform basic searches. However, it comes with rate limits, which restrict the number of requests you can make within a specific timeframe. If you exceed these limits, your application might get throttled or even blocked. The enterprise tier is where things get serious. This is intended for businesses and organizations that require high-volume data access, real-time streaming, and advanced analytics. It offers more extensive endpoints, higher rate limits, and dedicated support. The pricing for this tier is usually custom, depending on your specific needs and usage. You’ll need to contact Twitter directly to discuss your requirements and get a quote. It’s also worth noting that Twitter occasionally introduces new tiers or modifies the existing ones, so it’s always a good idea to check their official developer documentation for the most up-to-date information. Keep an eye on any announcements from Twitter’s developer relations team, as they often provide insights into pricing changes and new features.\n\nFactors Influencing the Cost of Twitter API\n\nThe cost of accessing the Twitter API isn’t just about picking a tier; several factors can influence how much you end up paying. Let’s dive into some of these key elements. Data volume is a big one. The more data you pull from Twitter, the more you’re likely to pay. This is especially true if you’re using the API for large-scale data analysis or monitoring. Different tiers offer varying levels of data access, and exceeding those limits can lead to additional charges. Rate limits also play a crucial role. Each API endpoint has a rate limit, which determines how many requests you can make within a specific time window. If your application needs to make frequent requests, you’ll need a tier that offers higher rate limits, which usually comes at a higher cost. The specific endpoints you need access to can also affect the price. Some endpoints, such as those that provide real-time streaming data or historical data, might be considered premium and require a higher-tier subscription. Your intended use case matters too. Twitter might offer different pricing structures for different types of applications. For example, academic researchers might be eligible for discounted rates or special access programs. Finally, keep in mind that Twitter can change its pricing policies at any time. It’s essential to stay updated with the latest announcements and documentation to avoid any surprises. Regularly reviewing your usage and optimizing your API calls can also help you manage your costs effectively. So, before you start building your application, take the time to carefully assess your data needs, rate limit requirements, and the specific endpoints you’ll be using. This will help you choose the right tier and avoid overpaying.\n\nHow to Check Twitter API Pricing\n\nOkay, so you’re ready to figure out exactly how much the Twitter API will cost you? Here’s a step-by-step guide to checking the current pricing and understanding what you’ll be paying for. First off, head over to the Twitter Developer Platform website. This is your go-to resource for all things API-related. Look for the “Pricing” or “Plans” section. It’s usually located in the navigation menu or within the developer documentation. Once you find the pricing page, you’ll see a breakdown of the different tiers available. Each tier should list its features, rate limits, and, of course, the price. Take your time to compare the tiers and see which one best fits your needs. If you have specific requirements that aren’t covered by the standard tiers, you might need to contact Twitter’s sales team directly. They can provide custom pricing options tailored to your use case. To do this, look for a “Contact Sales” or “Get a Quote” link on the pricing page. When you reach out to sales, be prepared to provide detailed information about your project, including the expected data volume, rate limit requirements, and the specific endpoints you’ll be using. This will help them provide an accurate quote. Also, don’t forget to check the fine print. Look for any hidden fees or additional charges that might apply. For example, some tiers might charge extra for exceeding rate limits or accessing premium endpoints. Finally, stay updated with any announcements from Twitter regarding pricing changes. They often announce these changes on their developer blog or through their official Twitter account. By following these steps, you’ll be well-equipped to understand the Twitter API pricing and make an informed decision about which tier is right for you.\n\nAlternatives to Paid Twitter API Access\n\nAlright, so the Twitter API pricing might be a bit of a buzzkill. But don’t throw in the towel just yet! There are a few alternative routes you can explore if you’re looking to minimize costs or avoid paying altogether. One option is to explore open-source libraries and tools. These can sometimes provide access to Twitter data without directly using the official API. However, keep in mind that these tools might have limitations and may not be as reliable as the official API. Another approach is to use third-party APIs or data providers. These services often offer aggregated Twitter data at a lower cost than the official API. They might scrape Twitter data or use other methods to collect and provide the information you need. Just be sure to check the terms of service and ensure that you’re complying with Twitter’s policies. For academic research, Twitter sometimes offers special access programs or discounted rates. If you’re a researcher, it’s worth exploring these options. You might be able to get access to the API for free or at a reduced cost. If your needs are very limited, you might be able to get by with the basic free access that Twitter provides. This might be enough for small personal projects or simple tasks. However, be aware that the free tier has significant limitations and might not be suitable for anything beyond basic usage. Finally, consider whether you really need real-time data. If you can get by with historical data, you might be able to find datasets or archives that are available for free or at a lower cost. By exploring these alternatives, you might be able to find a solution that fits your budget and meets your needs. Just be sure to do your research and understand the limitations of each option before making a decision.\n\nTips for Minimizing Twitter API Costs\n\nOkay, so you’ve decided to use the Twitter API, but you want to keep those costs as low as possible? Smart move! Here are some practical tips to help you minimize your expenses. First and foremost, optimize your API requests. Only request the data you actually need. The more data you request, the more you’re likely to pay. Use the API’s filtering and pagination options to narrow down your results and avoid unnecessary data transfer. Cache your data whenever possible. If you’re repeatedly requesting the same data, store it locally and only update it periodically. This will reduce the number of API calls you need to make and save you money. Monitor your API usage regularly. Keep an eye on how many requests you’re making and identify any areas where you can optimize. Twitter provides usage dashboards and analytics tools that can help you track your API consumption. Implement error handling and retry mechanisms. If your application encounters errors, don’t just keep retrying the same request. Implement exponential backoff to avoid overwhelming the API and incurring unnecessary charges. Use webhooks instead of polling. Webhooks allow Twitter to push data to your application in real-time, rather than you having to constantly poll the API. This can significantly reduce the number of requests you need to make. Consider using compression to reduce the size of the data you’re transferring. This can help you save on bandwidth costs and improve the performance of your application. Review your code regularly to identify and fix any inefficient API calls. Even small optimizations can add up over time and save you a significant amount of money. Finally, stay updated with Twitter’s API documentation and best practices. By following these tips, you can significantly reduce your Twitter API costs and make your application more efficient.\n\nConclusion: Is the Twitter API Worth the Cost?\n\nSo, we’ve covered a lot about the Twitter API and its costs. The big question remains: is it worth the investment? Well, like most things, it depends. If you’re a business that relies on real-time Twitter data for marketing, customer service, or data analysis, then the API is likely worth the cost. It provides access to valuable insights and allows you to automate tasks that would otherwise be time-consuming and expensive. For researchers, the Twitter API can be a valuable tool for studying social trends, public opinion, and more. While the costs can be a barrier, the insights gained can often justify the investment. If you’re a hobbyist or developer working on a personal project, the decision is a bit more nuanced. You’ll need to carefully weigh the costs against the benefits and consider whether there are any alternative solutions that might meet your needs. Ultimately, the value of the Twitter API depends on your specific goals, budget, and technical expertise. If you’re willing to invest the time and effort to optimize your API usage and explore alternative solutions, you can often find a way to make it work for you. Just remember to stay informed about Twitter’s pricing policies and best practices, and don’t be afraid to experiment and iterate. By carefully considering all these factors, you can make an informed decision about whether the Twitter API is the right choice for you. Remember to always check the most recent data available on the X developer platform for the most accurate information. Have fun!",
    "link": "https://cbconnect-api-dev.resultsathand.com/tech-signal/twitter-api-cost-is-access-free-or-paid-1764797574",
    "snippet": "Let's break it down in a way that's easy to understand. The short answer is that it depends on your usage and the level of access you need.",
    "title": "Twitter API Cost: Is Access Free Or Paid? - Resultsathand"
  },
  {
    "content_readable": "This is part one of the Advanced Use Cases series:\n\n1️⃣ Extract Metadata from Queries to Improve Retrieval\n\n2️⃣ Query Expansion\n\n3️⃣ Query Decomposition\n\n4️⃣ Automated Metadata Enrichment\n\nSometimes a single question is multiple questions in disguise. For example: “Did Microsoft or Google make more money last year?”. To get to the correct answer for this seemingly simple question, we actually have to break it down: “How much money did Google make last year?” and “How much money did Microsoft make last year?”. Only if we know the answer to these 2 questions can we reason about the final answer.\n\nThis is where query decomposition comes in. This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach:\n\nDecompose the original question into smaller questions that can be answered independently to each other. Let’s call these ‘sub questions’ here on out.\nReason about the final answer to the original question, based on each sub-answer.\n\nWhile for many query/dataset combinations, this may not be required, for some, it very well may be. At the end of the day, often one query results in one retrieval step. If within that one single retrieval step we are unable to have the retriever return both the money Microsoft made last year and Google, then the system will struggle to produce an accurate final response.\n\nThis method ensures that we are:\n\nretrieving the relevant context for each sub question.\nreasoning about the final answer given each answer based on the contexts retrieved for each sub question.\n\nIn this article, I’ll be going through some key steps that allow you to achieve this. You can find the full working example and code in the linked recipe from our cookbook. Here, I’ll only show the most relevant parts of the code.\n\n🚀 I’m sneaking something extra into this article. I saw the opportunity to try out the structured output functionality (currently in beta) by OpenAI to create this example. For this step, I extended the OpenAIGenerator in Haystack to be able to work with Pydantic schemas. More on this in the next step.\n\nLet’s try build a full pipeline that makes use of query decomposition and reasoning. We’ll use a dataset about Game of Thrones (a classic for Haystack) which you can find preprocessed and chunked on Tuana/game-of-thrones on Hugging Face Datasets.\n\nDefining our Questions Structure\n\nOur first step is to create a structure within which we can contain the subquestions, and each of their answers. This will be used by our OpenAIGenerator to produce a structured output.\n\nfrom pydantic import BaseModel\n\nclass Question(BaseModel):\n    question: str\n    answer: Optional[str] = None\n\nclass Questions(BaseModel):\n    questions: list[Question]\n\n\nThe structure is simple, we have Questions made up of a list of Question. Each Question has the question string as well as an optional answer to that question.\n\nDefining the Prompt for Query Decomposition\n\nNext up, we need to get an LLM to decompose a question and produce multiple questions. Here, we will start making use of our Questions schema.\n\nsplitter_prompt = \"\"\"\nYou are a helpful assistant that prepares queries that will be sent to a search component.\nSometimes, these queries are very complex.\nYour job is to simplify complex queries into multiple queries that can be answered\nin isolation to eachother.\n\nIf the query is simple, then keep it as it is.\nExamples\n1. Query: Did Microsoft or Google make more money last year?\n   Decomposed Questions: [Question(question='How much profit did Microsoft make last year?', answer=None), Question(question='How much profit did Google make last year?', answer=None)]\n2. Query: What is the capital of France?\n   Decomposed Questions: [Question(question='What is the capital of France?', answer=None)]\n3. Query: {{question}}\n   Decomposed Questions:\n\"\"\"\n\nbuilder = PromptBuilder(splitter_prompt)\nllm = OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions})\n\n\nAnswering Each Sub Question\n\nFirst, let’s build a pipeline that uses the splitter_prompt to decompose our question:\n\nquery_decomposition_pipeline = Pipeline()\n\nquery_decomposition_pipeline.add_component(\"prompt\", PromptBuilder(splitter_prompt))\nquery_decomposition_pipeline.add_component(\"llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\n\nquery_decomposition_pipeline.connect(\"prompt\", \"llm\")\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question}})\nprint(result[\"llm\"][\"structured_reply\"])\n\n\nThis produces the following Questions (List[Question])\n\nquestions=[Question(question='How many siblings does Jamie have?', answer=None), \n           Question(question='How many siblings does Sansa have?', answer=None)]\n\n\nNow, we have to fill in the answer fields. For this step, we need to have a separate prompt and two custom components:\n\nThe CohereMultiTextEmbedder which can take multiple questions rather than a single one like the CohereTextEmbedder.\nThe MultiQueryInMemoryEmbeddingRetriever which can again, take multiple questions and their embeddings, returning question_context_pairs. Each pair contains the question and documents that are relevant to that question.\n\nNext, we need to construct a prompt that can instruct a model to answer each subquestion:\n\nmulti_query_template = \"\"\"\nYou are a helpful assistant that can answer complex queries.\nHere is the original question you were asked: {{question}}\n\nAnd you have split the task into the following questions:\n{% for pair in question_context_pairs %}\n  {{pair.question}}\n{% endfor %}\n\nHere are the question and context pairs for each question.\nFor each question, generate the question answer pair as a structured output\n{% for pair in question_context_pairs %}\n  Question: {{pair.question}}\n  Context: {{pair.documents}}\n{% endfor %}\nAnswers:\n\"\"\"\n\nmulti_query_prompt = PromptBuilder(multi_query_template)\n\n\nLet’s build a pipeline that can answer each individual sub question. We will call this the query_decomposition_pipeline :\n\nquery_decomposition_pipeline = Pipeline()\n\nquery_decomposition_pipeline.add_component(\"prompt\", PromptBuilder(splitter_prompt))\nquery_decomposition_pipeline.add_component(\"llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\nquery_decomposition_pipeline.add_component(\"embedder\", CohereMultiTextEmbedder(model=\"embed-multilingual-v3.0\"))\nquery_decomposition_pipeline.add_component(\"multi_query_retriever\", MultiQueryInMemoryEmbeddingRetriever(InMemoryEmbeddingRetriever(document_store=document_store)))\nquery_decomposition_pipeline.add_component(\"multi_query_prompt\", PromptBuilder(multi_query_template))\nquery_decomposition_pipeline.add_component(\"query_resolver_llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\n\nquery_decomposition_pipeline.connect(\"prompt\", \"llm\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"embedder.questions\")\nquery_decomposition_pipeline.connect(\"embedder.embeddings\", \"multi_query_retriever.query_embeddings\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"multi_query_retriever.queries\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"embedder.questions\")\nquery_decomposition_pipeline.connect(\"multi_query_retriever.question_context_pairs\", \"multi_query_prompt.question_context_pairs\")\nquery_decomposition_pipeline.connect(\"multi_query_prompt\", \"query_resolver_llm\")\n\n\nRunning this pipeline with the original question “Who has more siblings, Jamie or Sansa?”, results in the following structured output:\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question},\n                                           \"multi_query_prompt\": {\"question\": question}})\n\nprint(result[\"query_resolver_llm\"][\"structured_reply\"])\n\n\nquestions=[Question(question='How many siblings does Jamie have?', answer='2 (Cersei Lannister, Tyrion Lannister)'),\n           Question(question='How many siblings does Sansa have?', answer='5 (Robb Stark, Arya Stark, Bran Stark, Rickon Stark, Jon Snow)')]\n\n\nReasoning About the Final Answer\n\nThe final step we have to take is to reason about the ultimate answer to the original question. Again, we create a prompt that will instruct an LLM to do this. Given we have the questions output that contains each sub question and answer, we will make these inputs to this final prompt.\n\nreasoning_template = \"\"\"\nYou are a helpful assistant that can answer complex queries.\nHere is the original question you were asked: {{question}}\n\nYou have split this question up into simpler questions that can be answered in\nisolation.\nHere are the questions and answers that you've generated\n{% for pair in question_answer_pair %}\n  {{pair}}\n{% endfor %}\n\nReason about the final answer to the original query based on these questions and\naswers\nFinal Answer:\n\"\"\"\n\nresoning_prompt = PromptBuilder(reasoning_template)\n\n\nTo be able to augment this prompt with the question answer pairs, we will have to extend our previous pipeline and connect the structured_reply from the previous LLM, to the question_answer_pair input of this prompt.\n\nquery_decomposition_pipeline.add_component(\"reasoning_prompt\", PromptBuilder(reasoning_template))\nquery_decomposition_pipeline.add_component(\"reasoning_llm\", OpenAIGenerator(model=\"gpt-4o-mini\"))\n\nquery_decomposition_pipeline.connect(\"query_resolver_llm.structured_reply\", \"reasoning_prompt.question_answer_pair\")\nquery_decomposition_pipeline.connect(\"reasoning_prompt\", \"reasoning_llm\")\n\n\nNow, let’s run this final pipeline and see what results we get:\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question},\n                                           \"multi_query_prompt\": {\"question\": question},\n                                           \"reasoning_prompt\": {\"question\": question}},\n                                           include_outputs_from=[\"query_resolver_llm\"])\n\nprint(\"The original query was split and resolved:\\n\")\n\nfor pair in result[\"query_resolver_llm\"][\"structured_reply\"].questions:\n  print(pair)\nprint(\"\\nSo the original query is answered as follows:\\n\")\nprint(result[\"reasoning_llm\"][\"replies\"][0])\n\n\n🥁 Drum roll please:\n\nThe original query was split and resolved:\n\nquestion='How many siblings does Jaime have?' answer='Jaime has one sister (Cersei) and one younger brother (Tyrion), making a total of 2 siblings.'\nquestion='How many siblings does Sansa have?' answer='Sansa has five siblings: one older brother (Robb), one younger sister (Arya), and two younger brothers (Bran and Rickon), as well as one older illegitimate half-brother (Jon Snow).'\n\nSo the original query is answered as follows:\n\nTo determine who has more siblings between Jaime and Sansa, we need to compare the number of siblings each has based on the provided answers.\n\nFrom the answers:\n- Jaime has 2 siblings (Cersei and Tyrion).\n- Sansa has 5 siblings (Robb, Arya, Bran, Rickon, and Jon Snow).\n\nSince Sansa has 5 siblings and Jaime has 2 siblings, we can conclude that Sansa has more siblings than Jaime.\n\nFinal Answer: Sansa has more siblings than Jaime.\n\n\nWrapping up\n\nGiven the right instructions, LLMs are good at breaking down tasks. Query decomposition is a great way we can make sure we do that for questions that are multiple questions in disguise.\n\nIn this article, you learned how to implement this technique with a twist 🙂 Let us know what you think about using structured outputs for these sorts of use cases. And check out the Haystack experimental repo to see what new features we’re working on.",
    "link": "https://haystack.deepset.ai/blog/query-decomposition",
    "snippet": "This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach.",
    "title": "Advanced RAG: Query Decomposition \u0026 Reasoning - Haystack"
  },
  {
    "content_readable": "This page provides a list of attributes and keywords that are used in YouTrack query language. You'll also find a complete list of operators, symbols, and relative date parameters that are recognized in search queries.\n\nSeveral references on this page are not available in Simple Search. Switch to Advanced Search to access them.\n\nIssue Attributes\n\nEvery issue has base attributes that are set automatically by YouTrack. These include the issue ID, the user who created or applied the last update to the issue, and so on.\n\nThese search attributes represent an \u003cAttribute\u003e in the Search Query Grammar. Their values correspond to the \u003cValue\u003e or \u003cValueRange\u003e parameter.\n\nAttribute-based search uses the syntax attribute: value.\n\nYou can specify multiple values for the target attribute, separated by commas.\n\nExclude specific values from the search results with the syntax attribute: -value.\n\nIn many cases, you can omit the attribute and reference values directly with the # or - symbols. For additional guidelines, see Advanced Search.\n\nattachment text\n\nattachment text: \u003ctext\u003e\n\nReturns issues that include image attachments with the specified text.\n\nattachments\n\nattachments: \u003ctext\u003e\n\nReturns issues that include attachments with the specified filename.\n\nBoard\n\nBoard \u003cboard name\u003e: \u003csprint name\u003e\n\nReturns issues that are assigned to the specified sprint on the specified agile board. To find issues that are assigned to agile boards with sprints disabled, use has: \u003cboard name\u003e.\n\ncc recipients\n\ncc recipients: \u003cuser\u003e\n\nReturns tickets where the specified users are added as CCs.\n\ncode\n\ncode: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words that are formatted as code in the issue description or comments. This includes matches that are formatted as inline code spans, indented and fenced code blocks, and stack traces.\n\ncommented: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues to which comments were added on the specified date or within the specified period.\n\ncommenter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were commented by the specified user or by a member of the specified group.\n\ncomments: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in a comment.\n\ncreated\n\ncreated: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that were created on a specific date or within a specified time frame.\n\ndescription\n\ndescription: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in the issue description.\n\ndocument type\n\ndocument type: Issue | Ticket\n\nReturns either issue or ticket type documents.\n\nGantt\n\nGantt: \u003cchart name\u003e\n\nReturns issues that are assigned to the specified Gantt chart.\n\nhas\n\nhas: \u003cattribute\u003e\n\nThe has keyword functions as a Boolean search term. When used in a search query, it returns all issues that contain a value for the specified attribute. Use the minus operator (-) before the specified attribute to find issues that have empty values.\n\nFor example, to find all issues in the TST project that are assigned to the current user, have a duplicates link, have attachments, but do not have any comments, enter in: TST for: me has: duplicates , attachments , -comments.\n\nYou can use the has keyword in combination with the following attributes:\n\nAttribute\n\nDescription\n\nattachments\n\nReturns issues that have attachments.\n\nboards\n\nReturns issues that are assigned to at least one agile board. When used with an exclusion operator (-), returns issues that aren't assigned to any boards.\n\nBoard \u003cboard name\u003e\n\nReturns issues that are assigned to the specified agile board.\n\ncomments\n\nReturns issues that have one or more comments.\n\ndescription\n\nReturns issues that do not have an empty description.\n\n\u003cfield name\u003e\n\nReturns issues that contain any value in the specified custom field. Enclose field names that contain spaces in braces.\n\nGantt\n\nReturns issues that are assigned to any Gantt chart.\n\n\u003clink type name\u003e\n\nReturns issues that have links that match the specified outward name or inward name. Enclose link names that contain spaces in braces.\n\nFor example, to find issues that are linked as subtasks to parent issues, use:\n\nhas: {Subtask of}\n\nTo find issues that aren't linked to a parent issue, use:\n\nhas: -{Subtask of}\n\nlinks\n\nReturns issues that have any issue link type.\n\nstar\n\nReturns issues that have the star tag for the current user.\n\nunderestimation\n\nReturns issues where the total spent time is greater than the original estimation value.\n\nvcs changes\n\nReturns issues that contain vcs changes.\n\nvotes\n\nReturns issues that have one or more votes.\n\nwork\n\nReturns issues that have one or more work items.\n\nissue ID\n\nissue ID: \u003cissue ID\u003e, #\u003cissue ID\u003e\n\nReturns an issue that matches the specified issue ID. This attribute can also be referenced as a single value with the syntax #\u003cissue ID\u003e or -\u003cissue ID\u003e. When the search returns a single issue, the result is displayed in single issue view.\n\nIf you don't use the syntax for an attribute-based search (issue ID: \u003cvalue\u003e or #\u003cvalue\u003e), the input is also parsed as a text search. In addition to any issue that matches the specified issue ID, the search results include any issue that contains the specified ID in any text attribute.\n\nIf you set the issue ID in quotes, the input is only parsed as a text search. The search results only include issues that contain the specified ID in a text attribute.\n\nNote that even when an issue ID is parsed as a text search, the results do not include issue links. To find issues based on issue links, use the links attribute or reference a specific link type.\n\nlinks\n\nlinks: \u003cissue ID\u003e\n\nReturns all issues that contain links to the specified issue.\n\nlooks like\n\nlooks like: \u003cissue ID\u003e\n\nReturns issues in which the issue summary or description contains words that are found in the issue summary or description in the specified issue. Issues that contain matching words in the issue summary are given higher weight when the search results are sorted by relevance.\n\nmentioned in\n\nmentioned in: \u003cissue id\u003e\n\nReturns issues with issue IDs referenced in the description or a comment of the target issue. Issue IDs in supplemental text fields aren't included in the search results.\n\nmentions\n\nmentions: \u003cissue id\u003e, \u003cuser\u003e\n\nReturns issues that contain either @mention for the specified user or issue IDs referenced in the description or a comment. User mentions and issue IDs in supplemental text fields aren't included in the search results.\n\norganization\n\norganization: \u003corganization name\u003e\n\nReturns issues that belong to the specified organization. This attribute can also be referenced as a single value.\n\nproject\n\nproject: \u003cproject name\u003e | \u003cproject ID\u003e\n\nReturns issues that belong to the specified project. This attribute can also be referenced as a single value.\n\nreporter\n\nreporter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues and tickets that were created by the specified user or a member of the specified group, including tickets created on behalf of the specified user. Use me to return issues that were created by the current user.\n\nresolved date\n\nresolved date: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that were resolved on a specific date or within a specified time frame.\n\nsaved search\n\nsaved search: \u003csaved search name\u003e\n\nReturns issues that match the search criteria of a saved search. This attribute can also be referenced as a single value with the syntax #\u003csaved search name\u003e or -\u003csaved search name\u003e.\n\nsubmitter\n\nsubmitter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were submitted by the specified user or a member of the specified group on behalf of another user. Use me to return issues that were submitted by the current user.\n\nsummary\n\nsummary: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in the issue summary.\n\ntag\n\ntag: \u003ctag name\u003e\n\nReturns issues that match a specified tag. This attribute can also be referenced as a single value with the syntax #\u003ctag name\u003e or -\u003ctag name\u003e\n\nupdated\n\nupdated: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues where the most recent change occurred on a specific date or within a specified time frame.\n\nupdater\n\nupdater: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were last updated by the specified user or a member of the specified group. Use me to return issues to which you applied the last update.\n\nvcs changes\n\nvcs changes: \u003ccommit hash\u003e\n\nReturns issues that contain vcs changes that were applied in the commit object that is identified by the specified SHA-1 commit hash.\n\nvisible to\n\nvisible to: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that are visible to the specified user or a member of the specified group.\n\nvoter\n\nvoter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that have votes from the specified user or a member of the specified group.\n\nCustom Fields\n\nYou can find issues that are assigned specific values in a custom field. As with other issue attributes, you use the syntax attribute: value or attribute: -value. In this case, the attribute is the name of the custom field. In most cases, you can reference values directly with the # or - symbols.\n\nFor custom fields that are assigned an empty value, you can reference this property as a value. For example, to search for issues that are not assigned to a specific user, enter Assignee: Unassigned or #Unassigned. If the field is not assigned an empty value, find issues that do not store a value in the field with the syntax \u003cfield name\u003e: {No \u003cfield name\u003e} or has: -\u003cfield name\u003e.\n\nThis section lists the search attributes for default custom fields. Note that default fields and their values can be customized. The actual field names, values, and aliases may vary.\n\nAffected versions\n\nAffected versions: \u003cvalue\u003e\n\nReturns issues that were detected in a specific version of the product.\n\nAssignee\n\nAssignee: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns all issues that are assigned to the specified user or a member of the specified group.\n\nFix versions\n\nFix versions: \u003cvalue\u003e\n\nReturns issues that were fixed in a specific version of the product.\n\nFixed in build\n\nFixed in build: \u003cvalue\u003e\n\nReturns issues that were fixed in the specified build.\n\nPriority\n\nPriority: \u003cvalue\u003e\n\nReturns issues that match the specified priority level.\n\nState\n\nState: \u003cvalue\u003e | Resolved | Unresolved\n\nReturns issues that match the specified state.\n\nThe Resolved and Unresolved states cannot be assigned to an issue directly, as they are properties of specific values that are stored in the State field.\n\nBy default, Fixed, Won't fix, Duplicate, Incomplete, Obsolete, and Can't reproduce states are set as Resolved.\n\nThe Submitted, Open, In Progress, Reopened, and To be discussed states are set as Unresolved.\n\nSubsystem\n\nSubsystem: \u003cvalue\u003e\n\nReturns issues that are assigned to a specific subsystem within a project.\n\nType\n\nType: \u003cvalue\u003e\n\nReturns issues that match the specified issue type.\n\nIssue Links\n\nYou can search for issues based on the links that connect them to other issues. Search queries that reference a specific issue link type can be interpreted in two different ways:\n\nWhen specified as \u003clink type\u003e: \u003cissue ID\u003e, the query returns issues linked to the specified issue using this link type.\n\nUsing \u003clink type\u003e: (\u003csub-query\u003e), the query returns issues linked to any issue that matches the specified sub-query using this link type.\n\nWhen searching for linked issues, you can enter the outward name or inward name of any issue link type, then specify your search criteria.\n\nThis list contains search parameters for issue link types that are provided by default in YouTrack. The default issue link types can be customized, which means that the actual names may vary. You can also use this syntax to build search queries that refer to custom link types.\n\nlinks\n\nlinks: \u003cissue ID\u003e\n\nReturns issues that are linked to a target issue.\n\naggregate\n\naggregate \u003caggregation link type\u003e: \u003cissue ID\u003e\n\nReturns issues that are indirectly linked to a target issue. Use this search term to find, for example, issues that are parent issues for a parent issue or subtasks of issues that are also subtasks of a target issue. The results include any issue that is linked to the target issue using the specified link type, whether directly or indirectly.\n\nThis search argument is only compatible with aggregation link types.\n\nDepends on\n\nDepends on: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have depends on links to a target issue or any issue that matches the specified sub-query.\n\nDuplicates\n\nDuplicates: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have duplicates links to a target issue or any issue that matches the specified sub-query.\n\nIs duplicated by\n\nIs duplicated by: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have is duplicated by links to a target issue or any issue that matches the specified sub-query.\n\nIs required for\n\nIs required for: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have is required for links to a target issue or any issue that matches the specified sub-query.\n\nParent for\n\nParent for: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have parent for links to a target issue or any issue that matches the specified sub-query.\n\nRelates to\n\nRelates to: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have relates to links to a target issue or any issue that matches the specified sub-query.\n\nSubtask of\n\nSubtask of: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have subtask of links to a target issue or any issue that matches the specified sub-query.\n\nTime Tracking\n\nThere is a dedicated set of search attributes that you can use to find issues that contain time tracking data. These attributes look for specific values that have been added as work items to an issue.\n\nwork\n\nwork: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or phrase in a work item.\n\nwork author: \u003cuser\u003e\n\nReturns issues that have work items that were added by the specified user.\n\nwork type\n\nwork type: \u003cvalue\u003e\n\nReturns issues that have work items that are assigned the specified work type. The query work type: {No type} returns issues that have work items that are not assigned a work item type.\n\nwork date\n\nwork date: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that have work items that are recorded for the specified date or within the specified time frame.\n\ncustom work item attributes\n\nwork \u003cattribute name\u003e: \u003cattribute value\u003e\n\nReturns issues that have work items that are assigned the specified value for a specific work item attribute.\n\nSort Attributes\n\nYou can specify the sort order for the list of issues that are returned by the search query.\n\nYou can sort issues by any of the attributes on the following list. In the Search Query Grammar, these attributes represent the \u003cSortAttribute\u003e value.\n\nsort by\n\nsort by: \u003cvalue\u003e \u003csort order\u003e\n\nSorts issues that are returned by the query in the specified order.\n\nWhen you perform a text search, the results can be sorted by relevance. You cannot specify relevance as a sort attribute. For more information, see Sorting by Relevance.\n\nKeywords\n\nThere are a number of values that can be substituted with a keyword. When you use a keyword in a search query, you do not specify an attribute. A keyword is preceded by the number sign (#) or the minus operator. In the YouTrack Search Query Grammar, these keywords correspond to a \u003cSingleValue\u003e.\n\nme\n\nReferences the current user. This keyword can be used as a value for any attribute that accepts a user.\n\nWhen used as a single value (#me) the search returns issues that are assigned to, reported by, or commented by the current user.\n\nFor example, to find unresolved issues that are assigned to, reported by, or contain comments from the current user, enter #me -Resolved.\n\nThe results also include issues that contain references to the current user in any custom field that stores values as users. For example, you have a custom field Reviewed by that stores a user type. The search query #me -Resolved also includes issues that reference the current user in this custom field.\n\nmy\n\nAn alias for me.\n\nResolved\n\nThis keyword references the Resolved issue property. This property is set based on the current value or combination of values for any custom field that stores a state type. In the default State field, the Resolved property is enabled for the values Fixed, Won't fix, Duplicate, Incomplete, Obsolete, and Can't reproduce.\n\nFor projects that use multiple state-type fields, the Resolved property is only true when all the state-type fields are assigned values that are considered to be resolved.\n\nFor example, to find all resolved issues that were updated today, enter #Resolved updated: Today.\n\nUnresolved\n\nThis keyword references the Unresolved issue property. This property is set based on the current value or combination of values for any custom field that stores a state type. In the default State field, the Resolved property is disabled for the values Submitted, Open, In Progress, Reopened, and To be discussed.\n\nFor projects that use multiple state-type fields, the Unresolved property is true when any state-type field is assigned a value that is not considered to be resolved.\n\nFor example, to find all unresolved issues that are assigned to the user john.doe in the Test project, enter #Unresolved project: Test for: john.doe.\n\nReleased\n\nThis keyword references the Released property for values in a field that stores a version type. It can only be used together with the attribute name or alias for a version field. This means that it cannot be referenced as a single value.\n\nWith fields that store multiple values, the search query returns issues for which at least one of the versions that are stored in the field is marked as released.\n\nFor example, to find all issues in the Test project that are fixed in a version that has not yet been released, enter in: Test fixed in: -Released.\n\nArchived\n\nThis keyword references the Archived property for values in a field that stores a version type. It can only be used together with the attribute name or alias for a version field. This means that it cannot be referenced as a single value.\n\nWith fields that store multiple values, the search query only returns issues for which all the versions that are stored in the field are marked as archived.\n\nFor example, to find all issues in the Test project that are fixed in a version that has been archived, enter in: Test fixed in: Archived.\n\nOperators\n\nThe search query grammar applies default semantics to search queries that do not contain explicit logical operators.\n\nSearches that specify values for multiple attributes are treated as conjunctive. This means that the values are handled as if joined by an AND operator. For example, State: {In Progress} Priority: Critical returns issues that are assigned the specified state and priority.\n\nThis extends to queries that look for the presence or absence of a value for a specific attribute (has) in combination with a reverence to a specific value for the same attribute. The presence or absence of a value and the value itself are considered as separate attributes in the issue. For example, has: assignee Assignee: me only returns issues where the assignee is set and that assignee is you.\n\nFor text search, searches that include multiple words are treated as conjunctive. This means that the words are handled as if joined by an AND operator. For example, State: Open context usage returns issues that contain matching forms for both context and usage.\n\nSearches that include multiple values for a single attribute are treated as disjunctive. This means that the values are handled as if joined by an OR operator. For example, State: {In Progress}, {To be discussed} returns issues that are assigned either one or the other of these two states.\n\nYou can override the default semantics by applying explicit operators to the query.\n\nand\n\nThe AND operator combines matches for multiple search attributes to narrow down the search results. When you join search arguments with the AND operator, the resulting issues must contain matches for all the specified attributes. Use this operator for issue fields that store enum[*] types and tags.\n\nSearch arguments that are joined with an AND operator are always processed as a group and have a higher priority than other arguments that are joined with an OR operator in the query.\n\nHere are a few examples of search queries that contain AND operators:\n\nTo find issues in the Ktor project that are tagged as both Next build and to be tested, enter:\n\nin: Ktor and tag: {Next build} and tag: {to be tested}\n\nThe AND operator between the two tags ensures that the results only contain issues that have both tags.\n\nTo find all issues that are set as Critical priority in the Ktor project or are set as Major priority and are assigned to you in the Kotlin project, enter:\n\nin: Ktor #Critical or in: Kotlin #Major and for: me\n\nIf you were to remove the operators in this query, the references to the project and priority are parsed as disjunctive (OR) statements. The reference to the assignee (me) is then joined with a conjunctive (AND) statement. Instead of getting critical issues in the Ktor project plus a list of major-priority issues that you are assigned in Kotlin, you would only issues that are assigned to you that are either major or critical in either Ktor or Kotlin.\n\nor\n\nThe OR operator combines matches for multiple search attribute to broaden the search results.\n\nThis is very useful when searching for a term which has a synonym that might be used in an issue instead. For example, a search for lesson OR tutorial returns issues that contain matching forms for either \"lesson\" or \"tutorial\". If you remove the OR operator from the query, the search is performed conjunctively, which means the result would only include issues that contain matching forms for both words.\n\nHere's another example of a search query that contains an OR operator:\n\nTo find all issues in the Ktor project that are assigned to you or are tagged as to be tested in any project, enter:\n\nin: Ktor for: me or tag: {to be tested}\n\nParentheses\n\nUsing parentheses ( and ) combines various search arguments to change the order in which the attributes and operators are processed. The part of a search query inside the parentheses has priority and is always processed as a single unit.\n\nThe most common use of parentheses is to enclose two search arguments that are separated by an OR operator and further restrict the search results by joining additional search arguments with AND operators.\n\nAny time you use parentheses in a search query, you need to provide all the operators that join the parenthetical statement to neighboring search arguments. For example, the search query in: Kotlin #Critical (in: Ktor and for:me) cannot be processed. It must be written as in: Kotlin #Critical or (in: Ktor and for:me) instead.\n\nHere's an example of a search query that uses parentheses:\n\nTo find all issues that are assigned to you and are either assigned Critical priority in the Kotlin project or are assigned Major priority in the Ktor project, enter:\n\n(in: Kotlin #Critical or in: Ktor #Major) and for: me\n\nSymbols\n\nThe following symbols can be used to extend or refine a search query.\n\nSymbol\n\nDescription\n\nExamples\n\n-\n\nExcludes a subset from a set of search query results. When you use this symbol with a single value, do not use the number sign.\n\nTo find all unresolved issues except for issues with minor priority and sort the list of results by priority in ascending order, enter #unresolved -minor sort by: priority asc.\n\n#\n\nIndicates that the input represents a single value.\n\nTo find all unresolved issues in the MRK project that were reported by, assigned to, or commented by the current user, enter #my #unresolved in: MRK.\n\n,\n\nSeparates a list of values for a single attribute. Can be used in combination with a range.\n\nTo find all issues assigned to, reported or commented by the current user, which were created today or yesterday, enter #my created: Today, Yesterday.\n\n..\n\nDefines a range of values. Insert this symbol between the values that define the upper and lower ranges. The search results include the upper and lower bounds.\n\nTo find all issues fixed in version 1.2.1 and in all versions from 1.3 to 1.5, enter fixed in: 1.2.1, 1.3 .. 1.5.\n\nTo find all issues created between March 10 and March 13, 2018, enter created: 2018-03-10 .. 2018-03-13.\n\n*\n\nWildcard character. Its behavior is context-dependent.\n\nWhen used with the .. symbol, substitutes a value that determines the upper or lower bound in a range search. The search results are inclusive of the specified bound.\n\nWhen used in an attribute-based search, matches zero or more characters at the end of an attribute value. For more information, see Wildcards in Attribute-based Search.\n\nWhen used in text search, matches zero or more characters in a string. For more information, see Wildcards in Text Search.\n\nTo find all issues created on or before March 10, 2018, enter created: * .. 2018-03-10\n\nTo find issues that have tags that start with refactoring, enter tag: refactoring*.\n\nTo find unresolved issues that contain image attachments in PNG format, enter #Unresolved attachments: *.png.\n\n?\n\nMatches any single character in a string. You can only use this wildcard to search in attributes that store text. For more information, see Wildcards in Text Search.\n\nTo find issues that contain the words \"prioritize\" or \"prioritise\" in the issue description, enter description: prioriti?e\n\n{ }\n\nEncloses attribute values that contain spaces.\n\nTo find all issues with the Fixed state that have the tag to be tested, enter #Fixed tag: {to be tested}.\n\nDate and Period Values\n\nSeveral search attributes reference values that are stored as a date. You can search for dates as single values or use a range of values to define a period.\n\nSpecify dates in the format: YYYY-MM-DD or YYYY-MM or MM-DD. You also can specify a time in 24h format: HH:MM:SS or HH:MM. To specify both date and time, use the format: YYYY-MM-DD}}T{{HH:MM:SS. For example, the search query created: 2010-01-01T12:00 .. 2010-01-01T15:00 returns all issues that were created on 1 January 2010 between 12:00 and 15:00.\n\nPredefined Relative Date Parameters\n\nYou can also use pre-defined relative parameters to search for date values. The values for these parameters are calculated relative to the current date according to the time zone of the current user. The actual value for each parameter is shown in the query assist panel.\n\nThe following relative date parameters are supported:\n\nParameter\n\nDescription\n\nNow\n\nThe current instant.\n\nToday\n\nThe current calendar day.\n\nTomorrow\n\nThe next calendar day.\n\nYesterday\n\nThe previous calendar day.\n\nSunday\n\nThe calendar Sunday for the current week.\n\nMonday\n\nThe calendar Monday for the current week.\n\nTuesday\n\nThe calendar Tuesday for the current week.\n\nWednesday\n\nThe calendar Wednesday for the current week.\n\nThursday\n\nThe calendar Thursday for the current week.\n\nFriday\n\nThe calendar Friday for the current week.\n\nSaturday\n\nThe calendar Saturday for the current week.\n\n{Last working day}\n\nThe most recent working day as defined by the Workdays that are configured in the settings on the Time Tracking page in YouTrack.\n\n{This week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the current week.\n\n{Last week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the previous week.\n\n{Next week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the next week.\n\n{Two weeks ago}\n\nThe period from 00:00 Monday to 23:59 Sunday for the calendar week two weeks prior to the current date.\n\n{Three weeks ago}\n\nThe period from 00:00 Monday to 23:59 Sunday for the calendar week three weeks prior to the current date.\n\n{This month}\n\nThe period from the first day to the last day of the current calendar month.\n\n{Last month}\n\nThe period from the first day to the last day of the previous calendar month.\n\n{Next month}\n\nThe period from the first day to the last day of the next calendar month.\n\nOlder\n\nThe period from 1 January 1970 to the last day of the month two months prior to the current date.\n\nCustom Date Parameters\n\nIf the predefined date parameters don't help you find issues that matter most to you, define your own date range in your search query. Here are a few examples of the queries you can write with custom date parameters:\n\nFind issues that have new comments added in the last seven days:\n\ncommented: {minus 7d} .. Today\n\nFind issues that were updated in the last two hours:\n\nupdated: {minus 2h} .. *\n\nFind unresolved issues that are at least one and a half years old:\n\ncreated: * .. {minus 1y 6M} #Unresolved\n\nFind issues that are due in five days:\n\nDue Date: {plus 5d}\n\nTo define a custom time frame in your search queries, use the following syntax:\n\nTo specify dates or times in the past, use minus.\n\nTo specify dates or times in the future, use plus.\n\nSpecify the time frame as a series of whole numbers followed by a letter that represents the unit of time. Separate each unit of time with a space character. For example:\n\n2y 3M 1w 2d 12h\n\nQueries that specify hours will filter for events that took place during the specified hour. For example, if it is currently 15:35, a query that is written as created: {minus 48h} returns issues that were created two days ago, at any time between 3 and 4 PM. Meanwhile, a query that is written as created: {minus 2d} returns all issues that were created two days ago at any time between midnight and 23:59.\n\nThis level of precision only applies to hours. A query that references the unit of time as 14d returns exactly the same results as 2w.\n\nSearch queries that specify units of time shorter than one hour (minutes, seconds) are not supported.\n\nSearch Query Grammar\n\nThis page provides a BNF description of the YouTrack search query grammar.\n\n\u003cSearchRequest\u003e ::= \u003cOrExpression\u003e \u003cOrExpession\u003e ::= \u003cAndExpression\u003e ('or' \u003cAndExpression\u003e)* \u003cAndExpression\u003e ::= \u003cAndOperand\u003e ('and' \u003cAndOperand\u003e)* \u003cAndOperand\u003e ::= '('\u003cOrExpression\u003e? ')' | Term \u003cTerm\u003e ::= \u003cTermItem\u003e* \u003cTermItem\u003e ::= \u003cQuotedText\u003e | \u003cNegativeText\u003e | \u003cPositiveSingleValue\u003e | \u003cNegativeSingleValue\u003e | \u003cSort\u003e | \u003cHas\u003e | \u003cCategorizedFilter\u003e | \u003cText\u003e \u003cCategorizedFilter\u003e ::= \u003cAttribute\u003e ':' \u003cAttributeFilter\u003e (',' \u003cAttributeFilter\u003e)* \u003cAttribute\u003e ::= \u003cname of issue field\u003e \u003cAttributeFilter\u003e ::= ('-'? \u003cValue\u003e ) | ('-'? \u003cValueRange\u003e) | \u003cLinkedIssuesQuery\u003e \u003cLinkedIssuesQuery\u003e ::= ( \u003cOrExpression\u003e ) \u003cValueRange\u003e ::= \u003cValue\u003e '..' \u003cValue\u003e \u003cPositiveSingleValue\u003e ::= '#'\u003cSingleValue\u003e \u003cNegativeSingleValue\u003e ::= '-'\u003cSingleValue\u003e \u003cSingleValue\u003e ::= \u003cValue\u003e \u003cSort\u003e ::= 'sort by:' \u003cSortField\u003e (',' \u003cSortField\u003e)* \u003cSortField\u003e ::= \u003cSortAttribute\u003e ('asc' | 'desc')? \u003cHas\u003e ::= 'has:' \u003cAttribute\u003e (',' \u003cAttribute\u003e)* \u003cQuotedText\u003e ::= '\"' \u003ctext without quotes\u003e '\"' \u003cNegativeText\u003e ::= '-' \u003cQuotedText\u003e \u003cText\u003e ::= \u003ctext without parentheses\u003e \u003cValue\u003e ::= \u003cComplexValue\u003e | \u003cSimpleValue\u003e \u003cSimpleValue\u003e ::= \u003cvalue without spaces\u003e \u003cComplexValue\u003e ::= '{' \u003cvalue (can have spaces)\u003e '}'\n\nGrammar is case-insensitive.\n\nFor a complete list of search attributes, see Issue Attributes.\n\nTo see sample queries for common use cases, see Sample Search Queries.\n\n11 November 2025",
    "link": "https://www.jetbrains.com/help/youtrack/cloud/search-and-command-attributes.html",
    "snippet": "This page provides a list of attributes and keywords that are used in YouTrack query language. You'll also find a complete list of operators, symbols, and ...",
    "title": "Search Query Reference | YouTrack Cloud Documentation - JetBrains"
  },
  {
    "content_readable": "Introduced in 2020, the GitHub user profile README allow individuals to give a long-form introduction. This multi-part tutorial explains how I setup my own profile to create dynamic content to aid discovery of my projects:\n\nwith the Liquid template engine and Shields (Part 1 of 4)\nusing GitHub's GraphQL API to query dynamic data about all my repos (keep reading below)\nfetching RSS and Social cards from third-party sites (Part 3 of 4)\nautomating updates with GitHub Actions (Part 4 of 4)\n\nYou can visit github.com/j12y to see the final result of what I came up with for my own profile page.\n\nThe GitHub Repo Gallery\n\nThe intended behavior for my repo gallery is to create something similar to pinned repositories but with a bit more visual pizzazz to identify what the projects are about.\n\nIn addition to source code, the repo can have metadata associated with it:\n\n✔️ Name of the repository\n✔️ Short description of the project\n✔️ Programming language used for the project\n✔️ List of tags / topics\n✔️ Image that can be used for social cards\n\nAbout\n\nThe About has editable fields to set the description and topics.\n\nSettings\n\nThe Settings includes a place to upload an image for social media preview cards.\n\nIf you don't set a preview card image, GitHub will generate one automatically that includes some basic profile statistics and your user profile image.\n\nGetting Started with the GitHub REST API\n\nThe way I structured this project is to build a library of any functions related to querying GitHub in src/gh.ts. I used a .env file to store my personal access (classic) token for authentication during local development.\n\n├── package.json\n├── .env\n├── src\n│   ├── app.ts\n│   ├── gh.ts\n│   └── template\n│       ├── README.liquid\n│       ├── contact.liquid\n│       └── gallery.liquid\n└── tsconfig.json\n\n\nI started by using REST endpoints with the Octokit library and TypeScript bindings.\n\n// src/gh.ts\nimport { Octokit } from 'octokit';\nimport { RestEndpointMethodTypes } from '@octokit/plugin-rest-endpoint-methods'\nconst octokit = new Octokit({ auth: process.env.TOKEN});\n\nexport class GitHub {\n    // GET /users/{user}\n    // https://docs.github.com/en/rest/users/users#get-a-user\n    async getUserDetails(user: string): Promise\u003cRestEndpointMethodTypes['users']['getByUsername']['response']['data']\u003e {\n        const { data } = await octokit.rest.users.getByUsername({\n            username: user\n        });\n\n        return data;\n    };\n}\n\n\nFrom src/app.ts I initialize the GithHub class, fetch the results, and can inspect the data being returned as a way to get comfortable with the various endpoints.\n\n// src/app.ts\nimport dotenv from 'dotenv';\nimport { GitHub } from \"./gh\";\n\nexport async function main() {\n  dotenv.config();\n  const gh = new GitHub()\n\n  const details = await gh.getUserDetails();\n  console.log(details);\n}\nmain();\n\n\nI typically get started on projects with simple tests like this to make sure all the various pieces to an integration can be configured and work together before getting too far.\n\nUse the GitHub GraphQL Endpoint\n\nTo get the data needed for the gallery layout, it would be necessary to make multiple calls to REST endpoints. In addition there is some data not yet available from the REST endpoint at all.\n\nSwitching to query using the GitHub GraphQL interface becomes helpful. This single endpoint can process a number of queries and give precise control over the data needed.\n\n💡 The GitHub GraphQL Explorer was fundamentally useful for me to get the right queries defined\n\nThis query needs authorization with the personal access token to fetch profile details about followers similar to some of the details returned from the REST endpoints.\n\n// src/gh.ts\n\nconst { graphql } = require(\"@octokit/graphql\")\n\nexport class GitHub \n    // https://docs.github.com/en/graphql\n    graphqlWithAuth = graphql.defaults({\n        headers: {\n            authorization: `token ${process.env.TOKEN}`\n        }\n    })\n\n    async getProfileOverview(name: string): Promise\u003cany\u003e {\n        const query = `\n            query getProfileOverview($name: String!) { \n                user(login: $name) { \n                    followers(first: 100) {\n                        totalCount\n                        edges {\n                            node {\n                                login\n                                name\n                                twitterUsername\n                                email\n                            }\n                        }\n                    }\n                }\n            }\n        `;\n        const params = {'name': name};\n\n        return await this.graphqlWithAuth(query, params);\n    }\n}\n\n\nThere are other resources such as Learn GraphQL if you haven't written many queries yet which explains the basics around syntax, schemas, and types.\n\nGetting used to GitHub's GraphQL schema primarily involves walking a series of edges to find linked nodes for objects of interest and their data attributes. In this case, I started by querying a user profile, finding the list of linked followers, and then inspecting their corresponding node's login, name, and email address.\n\n   ┌────────────┐\n   │    user    │\n   └─────┬──────┘\n         │\n         └──followers\n               │\n               ├─── totalCount\n               │\n               └─── edges\n                     │\n                     └── node\n\n\n\nFaceted Search by Topic Frequency\n\nI often want to find repositories by a topic. The user interface makes it easy to filter among many repositories by programming language such as python but unless you know which topics are relevant can become hit or miss. Was it nlp or nltk I used to categorize related repositories. Did I use dolby or dolbyio to identify repos I have for work projects.\n\nA faceted search that narrows down the number of matching repositories can be helpful for finding relevant projects like this. Given topics on GitHub are open-ended and not constrained to fixed values, it can be easy to accidentally categorize repos with variations like lambda and aws-lambda such that searches only identify partial results.\n\nTo address this, a GraphQL query gathering topics by frequency of usage within an organization or individual account can help with identifying the most useful topics.\n\nThe steps for this would be:\n\nQuery repository topics\nProcess results to group topics by frequency\nUse a template to render the gallery\n\n1 - Query Repository Topics\n\nI used the following GraphQL query to fetch my repositories and their corresponding topics.\n\nconst query = `\n    query getReposOverview($name: String!) {\n        user(login: $name) {\n            repositories(first: 100 ownerAffiliations: OWNER) {\n                edges {\n                    node {\n                        name\n                        url\n                        description\n                        openGraphImageUrl\n                        repositoryTopics(first: 100) {\n                            edges {\n                                node {\n                                    topic {\n                                        name\n                                    }\n                                }\n                            }\n                        }\n                        primaryLanguage {\n                            name\n                        }\n                    }\n                }\n            }\n        }\n    }\n`;\n\n\nThis query starts by filtering by user owned repositories (not counting forks) along with the metadata such as the social image.\n\n2 - Process Results and Group Topics by Frequency\n\nIterating over the results of the query the convention used was to look for anything with the topic github-gallery as something to be featured in the gallery. We also get a count of usage for each of the other topics and programming languages.\n\nvar topics: {[id: string]: number } = {};\nvar languages: {[id: string]: number } = {};\nvar gallery: {[id: string]: any } = {};\n\nconst repos = await gh.getReposOverview(user);\nfor (let repo of repos.user.repositories.edges) {\n  // Count occurrences of each topic\n  repo.node.repositoryTopics.edges.forEach((topic: any) =\u003e {\n    if (topic.node.topic.name == 'github-gallery') {\n      gallery[repo.node.name] = repo;\n    } else {\n      topics[topic.node.topic.name] = topic.node.topic.name in topics ? topics[topic.node.topic.name] + 1 : 1;\n    }\n  });\n\n  // Count and include count of language used\n  if (repo.node.primaryLanguage) {\n    languages[repo.node.primaryLanguage.name] = repo.node.primaryLanguage.name in languages ? languages[repo.node.primaryLanguage.name] + 1 : 1;\n  }\n}\n\n\n3 - Use a template to render the gallery\n\nThe topics are ordered by how often they are used. From the previous post on setting up a dynamic profile, I'm passing scope to the liquid engine for any data to be made available in a template.\n\n  // Share topics sorted by frequency of use for filtering repositories\n  // from the organization\n  scope['topics'] = Object.entries(topics).sort(function (first, second) {\n    return second[1] - first[1];\n  });\n  scope['languages'] = Object.entries(languages).sort(function (first, second) {\n    return second[1] - first[1];\n  });\n\n  // Gather topics across repos\n  scope['gallery'] = Object.values(gallery);\n\n\n\nThe repository page on GitHub uses query parameters to sort and filter, so items like topic:nltk can be passed directly in the URL to load a filtered view of repositories. The shields create a nice looking button for navigating to the topic, and use of icons for programming languages helps find relevant code samples.\n\n\u003cp\u003eExplore some of my projects: \u003cbr/\u003e\n{% for language in languages %}\u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=language%3A{{language[0]}}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/badge/{{ language[0] }}-{{ language[1] }}-lightgrey?logo={{ language[0] }}\u0026label={{ language[0] }}\u0026labelColor=000000\" alt=\"{{ language[0] }}\"/\u003e\u003c/a\u003e {% endfor %}\n{% for topic in topics %}\u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=topic%3A{{topic[0]}}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/static/v1?label={{topic[0]}}\u0026message={{ topic[1] }}\u0026labelColor=blue\"/\u003e\u003c/a\u003e {% endfor %}\n\u003c/p\u003e\n\n\nThe presentation includes a 3-column row in a table for displaying the metadata about each featured gallery project. This could display all repositories, but limiting to one or two rows seems sensible for managing screen space.\n\n{% for tile in gallery limit:3 %}\n\u003ctd width=\"25%\" valign=\"top\" style=\"padding-top: 20px; padding-bottom: 20px; padding-left: 30px; padding-right: 30px;\"\u003e\n\u003ca href=\"{{ tile.node.url }}\"\u003e\u003cimg src=\"{{ tile.node.openGraphImageUrl }}\"/\u003e\u003c/a\u003e\n\u003cp\u003e\u003cb\u003e\u003ca href=\"{{ tile.node.url }}\"\u003e{{ tile.node.name }}\u003c/b\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e{{ tile.node.description }}\u003cbr/\u003e\n{% for topic in tile.node.repositoryTopics.edges %} \u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=topic%3A{{topic.node.topic.name }}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/badge/{{ topic.node.topic.name | replace: \"-\", \"--\" }}-blue?style=pill\"/\u003e\u003c/a\u003e {% endfor %}\n\u003c/p\u003e\n\u003c/td\u003e\n{% endfor %}\n\n\nWith all of that put together, we now have a gallery that displays a picture along with the name, description, and tags. The picture can highlight a user interface, architectural diagram, or some other branded visual to help identify the purpose of the project visually.\n\nWe can also use this to maintain our list of topics and make finding relevant topics for an audience easier to discover.\n\nLearn more\n\nI hope this overview helps with getting yourself sorted. The next article will dive into some of the other ways of aggregating content.\n\nFetching RSS and Social Cards for GitHub Profile (Part 3 of 4)\nAutomating GitHub Profile Updates with Actions (Part 4 of 4)\n\nDid this help you get your own profile started? Let me know and follow to get notified about updates.",
    "link": "https://dev.to/j12y/query-github-repo-topics-using-graphql-35ha",
    "snippet": "Creating a customized user profile page for GitHub to showcase work projects and make navigation to relevant topics easier.",
    "title": "Query GitHub Repo Topics Using GraphQL - DEV Community"
  },
  {
    "content_readable": "Updated\n\n4 days ago\n\nWith millions of conversations happening all over the web each day, it can be a long and tedious task trying to get more relevant mentions and tighten the scope of your query, but with the help of Advanced Topic Query, it can be at your fingertips.\n\nIn Social Listening, you have the option to create an advanced query that is not limited to ANY, ALL, or NONE formatting of query building. Advanced query builder can be used to form complex text queries which are not possible with a normal query builder.\n\nWhat is an Advanced Topic Query?\n\nAdvanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more.\n\nBy using advanced query you can pinpoint relevant information which is not possible with basic topic query.\n\nIt gives you the power to find the needle in a haystack.\n\n​\n\nBasic Topic Query v/s Advanced Topic Query\n\nWith more operators to use you can fetch conversations by language, geography, social media channel, volume, author, #listening, @account monitoring, user segment, and much more, it can give you access to more actionable insights.\n\nIn Basic Query, you can only use boolean operators like OR/ NOT/ AND/ along with NEAR. On the other hand, in Advanced Topic Query, it gives you access to use OR with/ inside AND, NOT (nested and within operator use cases), advanced operators, exact match operators etc.\n\nLet's see the use cases where advanced query will help in getting more insightful mentions –\n\nUse case #1: To search \"pepsi\" OR \"drink\" along with \"cups\".\n\nBasic Query\n\nAdvancd Query\n\nUse case #2: To get mentions of \"pepsi\" along with \"coke\" or \"sprite\" but not \"miranda\" with people having \"follower count\" between 100 to 1000 on \"twitter\".\n\nBasic Query\n\nAdvanced Query\n\nNot feasible in the basic Topic query\n\nThis is where we need the advanced Topic query.​\n\nHow to create an advanced Topic query?\n\nClick the New Tab icon. Under Sprinklr Insights, Click Topics within Listening.\n\nOn the Topics window, click Add Topic in the top right corner. Fill in the required fields and click Create.\n\nIn the Setup Query tab of Create New Topic window, select Advanced Query in the query section.\n\n​\n\nType your query in the Advanced Query field with the required operators and syntax.\n\nClick Save.\n\nTip: While using Instagram as a Listening Source, be sure that your query keywords include hashtags.\n\nWhich operators to use for building Topic queries?\n\nOperators for Topic queries\n\nIn creation of advanced queries along with boolean operators OR/ AND/ NOT/ etc, Sprinklr also supports operator types –\n\nSearch Operators\n\nExact Match Operators\n\nOperators for Getting Post Replies/Comments​\n\nSprinklr provides its user edge by giving them power to use Keywords List inside advanced query along with Operators mentioned.\n\nCreate query using Topic query operators\n\nFollowing are some most used operator examples and their results –\n\nOperator\n\nExample\n\nResult\n\nhello\n\nSearch for the term \"hello\"\n\nsocial sprinklr\n\nSearch for the phrases \"social\" and \"sprinklr\"\n\n​\n\nNote: Using this will show preview but topic can not be saved as it will show error, Use \"Social Sprinklr\" or (Social AND/OR/ NOT/ NEAR Sprinklr) to eliminate error.\n\nAND\n\nsocial AND sprinklr\n\nSearch for \"social\" and \"sprinklr\" anywhere within the complete message, irrespective of keywords between them\n\nOR\n\nsocial OR sprinklr\n\nSearch for \"social\" or \"sprinklr\"\n\nNOT\n\n\"social media\" NOT \"facebook\"\n\nSearch for results that contain \"social media\" but not \"facebook\"\n\n~\n\n\"social media\"~10\n\nSearch for \"social\" and \"media\" within 10 words of each other\n\nNEAR\n\nsocial NEAR/10 media\n\nSearch for \"social\" and \"media\" within 10 words of each other\n\nNote: This operator can be used with keyword lists.\n\nONEAR\n\nsocial ONEAR/10 media\n\nSearch for \"social\" and \"media\" within 10 words of each other in an ordered way\n\nNote: This operator searches social ahead of media.\n\ntitle\n\ntitle: (\"social media\")\n\nSearch for social media in the title of the message\n\nNote: It is mostly used for News, blogs, reviews and other sites.\n\nauthor\n\nauthor: \"social_media\"\n\nFetches all the mentions from author name: social_media\n\nSome other operators which are supported by Sprinklr are –\n\nProximity: It is used to define proximity or distance between 2 keywords only, whereas, NEAR can be used to define proximity between two keywords as well as keyword lists.\n\nOnear (Ordered Near): It sets the order in which the keywords will appear. For example, Keyword-List1 ONEAR/10 Keyword-List2 will ensure keywords from Keyword-List1 appear first and then Keyword-List2 keywords will follow within space of maximum 10.\n\nStep by step guide to make advanced Topic query\n\nUse case\n\nTo write query fetching mentions of ZARA –\n\n​\n\n(# listening is used for instagram listening)\n\nGetting mention along with clothing or fashion related terms only –\n\nRemoving profanity from mention (use case specific) –\n\nRemoving profanity from mention (use case specific) –\n\nAs social media has lots of profane words you can also remove it by making a keyword list and negating it from query –\n\nFiltering Mentions in English –\n\n​\n\nApplying source input as Twitter –\n\nGetting mentions of those users which have followers between 100 to 1000 –\n\n​\n\nAdvanced example showcasing use of Topic query operators and keyword list –\n\nBest practices while using Advanced Query\n\nUse of Parentheses\n\n​Parentheses are not necessary to enclose a search query but can be useful while grouping operations together for more complex queries.\n\n​\n\nFor example, if you want to return results that mention Samsung or Apple phones, and also want to query content that mentions phones along with either Apple or Samsung, you could use parentheses around Apple and Samsung to group three keywords together, as shown below –\n\nphone AND (Apple OR Samsung)\n\n​\n\nUse of parentheses within brackets, is further explained below with an example –\n\n[(internet of things ~3) OR iot OR internetofthings) AND (robots OR robot OR #robot)] NOT [things]\n\nTip: You can also use parentheses within brackets to set off additional operations within the Advanced Query field. The end result should look similar to the result summary of a basic query, built using multiple operations within a single section.\n\n\nAs a part of the rest of the query, this will perform the following operations –\n\nSearch for posts that contain the phrase \"internet of things\" or \"#internetofthings\"\n\nFrom within those results, keep any result that also says \"robots\" or \"robot\" or \"#robot\" within three words (a proximity search) of either \"internet of things\" or \"iot\" or \"internetofthings\".\n\nDiscard any results that just have the phrase \"things\" within.\n\nParentheses nested within brackets intend to set off different operations as isolated processes. In the previous example, if you build an Advanced Query that states [(internet of things OR iot OR internet of things) AND (robots OR robot OR #robot)] your query will return results that contain ANY of the first three terms and the second three terms.\n\nHowever, if you build an Advanced Query that states [internet of things OR iot OR internet of things AND robots OR robot OR #robot], your query will return any result that contains the phrase \"internet of things\" or the word \"iot\" or the word \"robot\" or the hashtag #robot or specifically the phrase \"internet of things\" within the same message as the word \"robots\".\n\nNote:\n\nYou cannot use a \"NOT\" statement with an \"OR\" statement.\n\n\nExample:\n( social OR NOT media ) ❌\n( social NOT media ) ✅\n\n(( social OR ( media NOT facebook )) ✅\n\nWhy?\n\nQuery should not contain \"NOT\" terms in \"OR\" with other terms, \"NOT\" clauses should be used in \"AND\" with other terms, using \"NOT\" in \"OR\" will bring too much data.\n\nUse of Quotation marks\n\nQuotation marks can be used for phrases in which you are looking for an exact match of those particular words in a specific order. Using parentheses or quotation marks for single-word queries is not mandatory.\n\nUse straight quotation marks ( \" \" ) for outlining phrases within it. The use of curved quotation marks (“ ”) will not produce your desired results.\n\nParentheses are generally used to group keywords or phrases joined by one or more operators together, but with other keywords involved, parentheses and quotations would act differently. For example –\n\nVersion 1: \"Phil Schiller\" AND \"Apple Marketing\" will return results for content with the exact phrase Phil Schiller (or phil schiller) and the exact phrase Apple Marketing (or apple marketing).\n\nNote: Here exact does not mean case sensitive as in the case of exactMessage Operator.\n\nExample: exactMessage: (\"Phil Schiller\" AND \"Apple Marketing\"), which will fetch results for phrase Phil Schiller (not phil schiller) and the exact phrase Apple Marketing (not apple marketing).\n\n\nVersion 2: \"Phil Schiller\" AND (Apple OR Marketing) will return results for content with the phrase \"Phil Schiller\" (together) and at least one of the words, Apple or Marketing.\n\nHandling for Broad \u0026 Ambiguous Keywords\n\nIt is very important to not use/reduce use of broad keywords in advanced queries. Broader keywords will fetch mentions that are unrelated to topic of interest, and eventually hinder dashboard/insights\n\nFor all keywords used in an advanced topic query, ensure they are directly related to the topic of interest.\n\nIn case keywords are broad but relevant to topic, they should be tied to some relevant keywords related to that topic, by using NEAR Operators\n\nExample: Robot is an important keyword for Robot Company. However just using this keyword will fetch irrelevant keywords as it’s a broad keyword used for other entities as well (Robot Street, etc).\n\nInstead of using just Robot keyword, we should use: Robot NEAR/4 (Technology OR “machine” OR # tech OR IOT OR “Internet of things” ….)\n\nNote how keywords related to Robot are used with NEAR Operator. Related keywords could be related entities, industry keywords, parent company, country keywords, etc.\n\nFrequently asked questions\n\n​\n\nIs it compulsory to put quotation marks around phrases like \"apple music\" or can we use apple music directly?\n\nHow can I eliminate posts with many spam #’s or @’s?\n\nCan exact match or parent operators be used in advanced query?\n\nWhy am I able to see mentions in preview during making of topic but not in dashboard?\n\nDuring listening to @ mentions a lot of spam mentions are also getting tagged along, e.g. like wanting to get mentions of @tom but messages of @tom_fan56 are also coming. How to remove these irrelevant mentions?\n\nIf I write query as “tom” will it also fetch mentions such as tom_jerry / @tom / #tom ?\n\n​",
    "link": "https://www.sprinklr.com/help/articles/faqs-and-advanced-usecases/create-an-advanced-topic-query/646331628ea3c9635cf36711",
    "snippet": "Advanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more. By ...",
    "title": "‎Create an Advanced Topic Query | Sprinklr Help Center"
  },
  {
    "content_readable": "The query language for the Azure Resource Graph supports many operators and functions. Each work and operate based on Kusto Query Language (KQL). To learn about the query language used by Resource Graph, start with the tutorial for KQL.\n\nThis article covers the language components supported by Resource Graph:\n\nUnderstanding the Azure Resource Graph query language\n\nResource Graph tables\nExtended properties\nResource Graph custom language elements\n\nShared query syntax (preview)\nSupported KQL language elements\n\nSupported tabular/top level operators\nQuery scope\nEscape characters\nNext steps\n\nResource Graph tables\n\nResource Graph provides several tables for the data it stores about Azure Resource Manager resource types and their properties. Resource Graph tables can be used with the join operator to get properties from related resource types.\n\nResource Graph tables support the join flavors:\n\ninnerunique\ninner\nleftouter\nfullouter\n\nResource Graph table Can join other tables? Description\nAdvisorResources Yes Includes resources related to Microsoft.Advisor.\nAlertsManagementResources Yes Includes resources related to Microsoft.AlertsManagement.\nAppServiceResources Yes Includes resources related to Microsoft.Web.\nAuthorizationResources Yes Includes resources related to Microsoft.Authorization.\nAWSResources Yes Includes resources related to Microsoft.AwsConnector.\nAzureBusinessContinuityResources Yes Includes resources related to Microsoft.AzureBusinessContinuity.\nChaosResources Yes Includes resources related to Microsoft.Chaos.\nCommunityGalleryResources Yes Includes resources related to Microsoft.Compute.\nComputeResources Yes Includes resources related to Microsoft.Compute Virtual Machine Scale Sets.\nDesktopVirtualizationResources Yes Includes resources related to Microsoft.DesktopVirtualization.\nDnsResources Yes Includes resources related to Microsoft.Network.\nEdgeOrderResources Yes Includes resources related to Microsoft.EdgeOrder.\nElasticsanResources Yes Includes resources related to Microsoft.ElasticSan.\nExtendedLocationResources Yes Includes resources related to Microsoft.ExtendedLocation.\nFeatureResources Yes Includes resources related to Microsoft.Features.\nGuestConfigurationResources Yes Includes resources related to Microsoft.GuestConfiguration.\nHealthResourceChanges Yes Includes resources related to Microsoft.Resources.\nHealthResources Yes Includes resources related to Microsoft.ResourceHealth.\nInsightsResources Yes Includes resources related to Microsoft.Insights.\nIoTSecurityResources Yes Includes resources related to Microsoft.IoTSecurity and Microsoft.IoTFirmwareDefense.\nKubernetesConfigurationResources Yes Includes resources related to Microsoft.KubernetesConfiguration.\nKustoResources Yes Includes resources related to Microsoft.Kusto.\nMaintenanceResources Yes Includes resources related to Microsoft.Maintenance.\nManagedServicesResources Yes Includes resources related to Microsoft.ManagedServices.\nMigrateResources Yes Includes resources related to Microsoft.OffAzure.\nNetworkResources Yes Includes resources related to Microsoft.Network.\nPatchAssessmentResources Yes Includes resources related to Azure Virtual Machines patch assessment Microsoft.Compute and Microsoft.HybridCompute.\nPatchInstallationResources Yes Includes resources related to Azure Virtual Machines patch installation Microsoft.Compute and Microsoft.HybridCompute.\nPolicyResources Yes Includes resources related to Microsoft.PolicyInsights.\nRecoveryServicesResources Yes Includes resources related to Microsoft.DataProtection and Microsoft.RecoveryServices.\nResourceChanges Yes Includes resources related to Microsoft.Resources.\nResourceContainerChanges Yes Includes resources related to Microsoft.Resources.\nResourceContainers Yes Includes management group (Microsoft.Management/managementGroups), subscription (Microsoft.Resources/subscriptions) and resource group (Microsoft.Resources/subscriptions/resourcegroups) resource types and data.\nResources Yes The default table if a table isn't defined in the query. Most Resource Manager resource types and properties are here.\nSecurityResources Yes Includes resources related to Microsoft.Security.\nServiceFabricResources Yes Includes resources related to Microsoft.ServiceFabric.\nServiceHealthResources Yes Includes resources related to Microsoft.ResourceHealth/events.\nSpotResources Yes Includes resources related to Microsoft.Compute.\nSupportResources Yes Includes resources related to Microsoft.Support.\nTagsResources Yes Includes resources related to Microsoft.Resources/tagnamespaces.\n\nFor a list of tables that includes resource types, go to Azure Resource Graph table and resource type reference.\n\nNote\n\nResources is the default table. While querying the Resources table, it isn't required to provide the table name unless join or union are used. But the recommended practice is to always include the initial table in the query.\n\nTo discover which resource types are available in each table, use Resource Graph Explorer in the portal. As an alternative, use a query such as \u003ctableName\u003e | distinct type to get a list of resource types the given Resource Graph table supports that exist in your environment.\n\nThe following query shows a simple join. The query result blends the columns together and any duplicate column names from the joined table, ResourceContainers in this example, are appended with 1. As ResourceContainers table has types for both subscriptions and resource groups, either type might be used to join to the resource from Resources table.\n\nResources\n| join ResourceContainers on subscriptionId\n| limit 1\n\n\nThe following query shows a more complex use of join. First, the query uses project to get the fields from Resources for the Azure Key Vault vaults resource type. The next step uses join to merge the results with ResourceContainers where the type is a subscription on a property that is both in the first table's project and the joined table's project. The field rename avoids join adding it as name1 since the property already is projected from Resources. The query result is a single key vault displaying type, the name, location, and resource group of the key vault, along with the name of the subscription it's in.\n\nResources\n| where type == 'microsoft.keyvault/vaults'\n| project name, type, location, subscriptionId, resourceGroup\n| join (ResourceContainers | where type=='microsoft.resources/subscriptions' | project SubName=name, subscriptionId) on subscriptionId\n| project type, name, location, resourceGroup, SubName\n| limit 1\n\n\nNote\n\nWhen limiting the join results with project, the property used by join to relate the two tables, subscriptionId in the above example, must be included in project.\n\nExtended properties\n\nAs a preview feature, some of the resource types in Resource Graph have more type-related properties available to query beyond the properties provided by Azure Resource Manager. This set of values, known as extended properties, exists on a supported resource type in properties.extended. To show resource types with extended properties, use the following query:\n\nResources\n| where isnotnull(properties.extended)\n| distinct type\n| order by type asc\n\n\nExample: Get count of virtual machines by instanceView.powerState.code:\n\nResources\n| where type == 'microsoft.compute/virtualmachines'\n| summarize count() by tostring(properties.extended.instanceView.powerState.code)\n\n\nResource Graph custom language elements\n\nShared query syntax (preview)\n\nAs a preview feature, a shared query can be accessed directly in a Resource Graph query. This scenario makes it possible to create standard queries as shared queries and reuse them. To call a shared query inside a Resource Graph query, use the {{shared-query-uri}} syntax. The URI of the shared query is the Resource ID of the shared query on the Settings page for that query. In this example, our shared query URI is /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS. This URI points to the subscription, resource group, and full name of the shared query we want to reference in another query. This query is the same as the one created in Tutorial: Create and share a query.\n\nNote\n\nYou can't save a query that references a shared query as a shared query.\n\nExample 1: Use only the shared query:\n\nThe results of this Resource Graph query are the same as the query stored in the shared query.\n\n{{/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS}}\n\n\nExample 2: Include the shared query as part of a larger query:\n\nThis query first uses the shared query, and then uses limit to further restrict the results.\n\n{{/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS}}\n| where properties_storageProfile_osDisk_osType =~ 'Windows'\n\n\nSupported KQL language elements\n\nResource Graph supports a subset of KQL data types, scalar functions, scalar operators, and aggregation functions. Specific tabular operators are supported by Resource Graph, some of which have different behaviors.\n\nSupported tabular/top level operators\n\nHere's the list of KQL tabular operators supported by Resource Graph with specific samples:\n\nKQL Resource Graph sample query Notes\ncount Count key vaults\ndistinct Show resources that contain storage\nextend Count virtual machines by OS type\njoin Key vault with subscription name Join flavors supported: innerunique, inner, leftouter, and fullouter. Limit of three join or union operations (or a combination of the two) in a single query, counted together, one of which might be a cross-table join. If all cross-table join use is between Resource and ResourceContainers, then three cross-table join are allowed. Custom join strategies, such as broadcast join, aren't allowed. For which tables can use join, go to Resource Graph tables.\nlimit List all public IP addresses Synonym of take. Doesn't work with Skip.\nmvexpand Legacy operator, use mv-expand instead. RowLimit max of 2,000. The default is 128.\nmv-expand List Azure Cosmos DB with specific write locations RowLimit max of 2,000. The default is 128. Limit of 3 mv-expand in a single query.\norder List resources sorted by name Synonym of sort\nparse Get virtual networks and subnets of network interfaces It's optimal to access properties directly if they exist instead of using parse.\nproject List resources sorted by name\nproject-away Remove columns from results\nsort List resources sorted by name Synonym of order\nsummarize Count Azure resources Simplified first page only\ntake List all public IP addresses Synonym of limit. Doesn't work with Skip.\ntop Show first five virtual machines by name and their OS type\nunion Combine results from two queries into a single result Single table allowed: | union [kind= inner|outer] [withsource=ColumnName] Table. Limit of three union legs in a single query. Fuzzy resolution of union leg tables isn't allowed. Might be used within a single table or between the Resources and ResourceContainers tables.\nwhere Show resources that contain storage\n\nThere's a default limit of three join and three mv-expand operators in a single Resource Graph SDK query. You can request an increase in these limits for your tenant through Help + support.\n\nTo support the Open Query portal experience, Azure Resource Graph Explorer has a higher global limit than Resource Graph SDK.\n\nNote\n\nYou can't reference a table as right table multiple times, which exceeds the limit of 1. If you do so, you would receive an error with code DisallowedMaxNumberOfRemoteTables.\n\nQuery scope\n\nThe scope of the subscriptions or management groups from which resources are returned by a query defaults to a list of subscriptions based on the context of the authorized user. If a management group or a subscription list isn't defined, the query scope is all resources, and includes Azure Lighthouse delegated resources.\n\nThe list of subscriptions or management groups to query can be manually defined to change the scope of the results. For example, the REST API managementGroups property takes the management group ID, which is different from the name of the management group. When managementGroups is specified, resources from the first 10,000 subscriptions in or under the specified management group hierarchy are included. managementGroups can't be used at the same time as subscriptions.\n\nExample: Query all resources within the hierarchy of the management group named My Management Group with ID myMG.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01\n\n\nRequest Body\n\n{\n  \"query\": \"Resources | summarize count()\",\n  \"managementGroups\": [\"myMG\"]\n}\n\n\nThe AuthorizationScopeFilter parameter enables you to list Azure Policy assignments and Azure role-based access control (Azure RBAC) role assignments in the AuthorizationResources table that are inherited from upper scopes. The AuthorizationScopeFilter parameter accepts the following values for the PolicyResources and AuthorizationResources tables:\n\nAtScopeAndBelow (default if not specified): Returns assignments for the given scope and all child scopes.\nAtScopeAndAbove: Returns assignments for the given scope and all parent scopes, but not child scopes.\nAtScopeAboveAndBelow: Returns assignments for the given scope, all parent scopes, and all child scopes.\nAtScopeExact: Returns assignments only for the given scope; no parent or child scopes are included.\n\nNote\n\nTo use the AuthorizationScopeFilter parameter, be sure to use the 2021-06-01-preview or later API version in your requests.\n\nExample: Get all policy assignments at the myMG management group and Tenant Root (parent) scopes.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-06-01-preview\n\n\nRequest Body Sample\n\n{\n  \"options\": {\n    \"authorizationScopeFilter\": \"AtScopeAndAbove\"\n  },\n  \"query\": \"PolicyResources | where type =~ 'Microsoft.Authorization/PolicyAssignments'\",\n  \"managementGroups\": [\"myMG\"]\n}\n\n\nExample: Get all policy assignments at the mySubscriptionId subscription, management group, and Tenant Root scopes.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-06-01-preview\n\n\nRequest Body Sample\n\n{\n  \"options\": {\n    \"authorizationScopeFilter\": \"AtScopeAndAbove\"\n  },\n  \"query\": \"PolicyResources | where type =~ 'Microsoft.Authorization/PolicyAssignments'\",\n  \"subscriptions\": [\"mySubscriptionId\"]\n}\n\n\nEscape characters\n\nSome property names, such as those that include a . or $, must be wrapped or escaped in the query or the property name is interpreted incorrectly and doesn't provide the expected results.\n\nDot (.): Wrap the property name ['propertyname.withaperiod'] using brackets.\n\nExample query that wraps the property odata.type:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.['odata.type']\n\n\nDollar sign ($): Escape the character in the property name. The escape character used depends on the shell that runs Resource Graph.\n\nBash: Use a backslash (\\) as the escape character.\n\nExample query that escapes the property $type in Bash:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.\\$type\n\n\ncmd: Don't escape the dollar sign ($) character.\n\nPowerShell: Use a backtick (`) as the escape character.\n\nExample query that escapes the property $type in PowerShell:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.`$type\n\n\nNext steps\n\nAzure Resource Graph query language Starter queries and Advanced queries.\nLearn more about how to explore Azure resources.",
    "link": "https://learn.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language",
    "snippet": "The query language for the Azure Resource Graph supports many operators and functions. Each work and operate based on Kusto Query Language (KQL).",
    "title": "Understanding the Azure Resource Graph query language - Microsoft"
  }
]
s4 llm_format success 2026-03-01 22:53:28 → 2026-03-01 22:54:06
Input (146258 bytes)
[
  {
    "content_readable": "Crawler is not allowed!",
    "link": "https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476",
    "snippet": "Hello X Developers, We're thrilled to officially announce the launch of our new X API Pay-Per-Use pricing model ... February 13, 2026. Announcing ...",
    "title": "Announcing the Launch of X API Pay-Per-Use Pricing"
  },
  {
    "content_readable": "Crawler is not allowed!",
    "link": "https://devcommunity.x.com/t/want-to-understand-the-pricing/256677",
    "snippet": "So the cost should be: At max, $6x0.01 + $4x0.01x2 = $0.14 (two because they each API request is capped at 1000 entries) right? But why is every ...",
    "title": "Want to understand the pricing - X API v2 - X Developer Community"
  },
  {
    "content_readable": "The X API pricing has dramatically changed since 2023 – free access is effectively gone. This complete guide covers authentication, rate limits, optimization strategies, and real-world use cases for building scalable X integrations with confidence.\n\n3 weeks ago\n\nThe X API (formerly Twitter API) has undergone dramatic changes since Elon Musk’s acquisition in 2023. What was once a free, developer-friendly platform is now a premium service with strict pricing tiers and carefully controlled access levels. For developers building bots, integrating real-time data, or creating social media management tools, understanding the current X API landscape is critical.\n\nThis comprehensive guide walks you through everything you need to know about obtaining X API credentials in 2026, understanding actual costs, and optimizing your implementation for efficiency.\n\nEssential concepts covered:\n\nHow X API pricing evolved from free to paid and the emerging pay-per-use model\nCurrent tiers breakdown and which tier fits your use case\nStep-by-step process to get your API credentials from the Developer Portal\nModern authentication methods and permission scopes\nFive proven optimization strategies to reduce costs and improve performance\n\nLet’s start by understanding where the X API fits into your development workflow and what’s currently available.\n\nThe X API Evolution: What Changed\n\nThe Twitter API has evolved dramatically over the years. Here’s the timeline of major changes:\n\nDate Event Impact on Developers\nOctober 2022 Elon Musk acquires Twitter Speculation about API changes begins\nFebruary 2023 Free API access eliminated Third-party clients (Tweetbot, Echofon) shut down; pricing becomes mandatory\nMarch 2023 Paid tiers introduced ($100, $2,500, $42,000) Entry price jumps 100x; developer ecosystem fragments\nJune 2024 Basic tier pricing doubles to $200/month Increased barrier to entry for indie developers\nOctober 2024 Official rebrand: Twitter → X All documentation and branding updated; confusing for legacy users\nNovember 2025 Pay-per-use pricing beta launches New consumption-based model with $500 developer vouchers for testing\n\nFree access became $200–$5,000/month in four years. Before planning an implementation, understand what the API actually provides and which tier matches your needs.\n\nWhat Can You Build With the X API?\n\nThe X API enables programmatic access to X’s infrastructure—from retrieving data to publishing content to automating responses. Here are the most common applications:\n\nBrand Monitoring \u0026 Social Intelligence\n\nTrack mentions, competitor activity, and trending conversations in real-time. Filtered streams deliver instant alerts when specific keywords or accounts generate activity, enabling teams to respond quickly to brand-relevant events.\n\nContent Scheduling\n\nAutomate posting schedules, manage multiple accounts from a single dashboard, and coordinate content workflows. Agencies and creators use these tools to handle dozens of X accounts without manual login-and-post cycles.\n\nWebsite Content Integration\n\nEmbed live X feeds, individual tweets, and trending topics directly into websites. Publishers keep content synchronized with live X activity without requiring manual updates or outdated embeds.\n\nData Analysis and Research\n\nAccess structured data for large-scale studies, trend analysis, and market research. The API provides historical search, engagement metrics, and user data at volumes that would be impossible to collect manually.\n\nAI \u0026 Sentiment Analysis\n\nFeed real-time X data into machine learning models, language models, and sentiment analysis systems. Applications range from audience monitoring to discourse analysis to predictive analytics.\n\nX API Pricing: The 2026 Tier System\n\nAs of today, X is testing a revolutionary pay-per-use pricing model, but the traditional tier system remains the active standard. Here’s what you need to know about both approaches.\n\n💲 Current Standard Pricing\n\nThe tiered pricing structure consists of three main tiers, each designed for different scales of usage:\n\nTier\tMonthly Cost\tAnnual Savings\tBest For\tKey Capabilities\nFree\t$0\t—\tDevelopment and testing only\t500 posts/month, read-heavy, 1 req per 24hrs on most endpoints, limited endpoint access\nBasic\t$200\t$2,100/year (12.5% savings)\tSmall projects, content monitoring, single app usage\t15,000 read requests/month, 50,000 write requests/month, standard endpoint access\nPro\t$5,000\t$54,000/year (10% savings)\tGrowing applications, full feature set, mission-critical systems\t1,000,000 read requests/month, 300,000 write requests/month, full endpoint access, priority support\nEnterprise\t$42,000+\tCustom pricing\tLarge-scale systems, dedicated infrastructure\tCustom rate limits, SLAs, dedicated support, advanced features, volumetric discounts\n\nWhile Basic is 25x cheaper ($200 vs $5,000), Pro gives you 100x more read capacity and unlocks critical features like full-archive search and real-time filtering. Most companies scale directly from Free → Basic → Pro.\n\n💢 What Changed: The Death of Free Access\n\nThe shift from free to paid access served two purposes: generating revenue from the platform’s data value, and reducing abuse. Free API access enabled spam bots, data scrapers, and malicious automation at scale.\n\nAvailable with Free Tier\n\n500 posts per calendar month (about 16-17 per day)\nRate-limited to 1 request per 24 hours on most endpoints\nNo posting, liking, or engaging – read-only access to public data only\nCannot write posts, create resources, or perform account actions\nNo access to trends, direct messaging, or advanced features\n\nReal-world impact: The Free tier is genuinely only for proof-of-concept work and local development testing. For any production application, you must budget for the Basic tier at minimum ($200/month).\n\n🔮 The New Pay-Per-Use Model (Beta)\n\nIn November 2025, X launched a closed beta for a revolutionary pricing approach: pay only for what you use. Instead of fixed monthly fees, developers in the beta pay individual prices for different API operations – similar to AWS or Google Cloud’s consumption-based billing.\n\nHow Pay-Per-Use Works\n\nThe beta pricing model assigns specific costs to each operation type. For example:\n\nReading a post costs a specific price (varies by operation)\nSearching posts costs more (higher computational load)\nCreating a post has its own rate\nAccessing trends uses a different pricing tier\nDirect messaging has separate pricing\n\nImportant Note: The pay-per-use model is in closed beta as of December 2025. Plan your implementation based on current tier pricing, but monitor the official X Developer Twitter (@XDevelopers) for announcements about broader rollout.\n\nAll developers in the closed beta receive a $500 voucher to experiment before committing to production usage.\n\nPotential Benefits Over Fixed Tiers\n\nNo payment for unused capacity (unlike fixed tier pricing)\nAbility to scale up or down without tier changes\nGranular control over spending per feature\nMore transparent cost attribution\n\nX provides an interactive API cost calculator where you can input your expected usage patterns and see exactly what you’d pay.\n\nX Authentication: How to Prove Your Identity\n\nBefore making any API request, you need to authenticate – prove to X that you’re authorized to access specific data. The X API v2 supports multiple authentication methods, each suited for different scenarios.\n\n🔐 OAuth 2.0 Authorization Code (Recommended for New Development)\n\nOAuth 2.0 is the modern standard for authentication and is recommended for all new development. It’s more secure than legacy approaches and handles both public and private user data.\n\nWhen to Use OAuth 2.0\n\nBuilding new applications from scratch\nWeb applications and mobile apps requiring user login\nAccessing private user data (private lists, draft posts)\nPerforming actions on behalf of users (posting, liking, following)\n\nHow It Works\n\nUser clicks “Sign in with X” in your application\nYour app redirects them to X’s authorization page\nUser grants permissions (you define the scopes requested)\nX returns an authorization code\nYour app exchanges the code for an access token\nYou use this token for API requests on behalf of the user\n\nRequired credentials: Client ID, Client Secret, and redirect URI (configured in your developer app settings).\n\n🔑 OAuth 1.0a User Context (Legacy, Still Supported)\n\nThis older method is still supported but not recommended for new development. OAuth 1.0a authenticates on behalf of a specific user and is primarily useful for legacy applications.\n\nPosted tweets or direct messages on a user’s behalf\nRetrieving a specific user’s private timeline\nManaging user-specific resources\n\nWhy it’s less preferred: More complex to implement, less secure than OAuth 2.0, and X is gradually moving developers toward OAuth 2.0.\n\n👥 Bearer Token (App-Only, Best for Public Data)\n\nBearer token authentication is the simplest approach for accessing public data without user context. Use this when you’re building tools that only need public information.\n\nWhen to Use\n\nSearching for public posts\nRetrieving public user profiles\nAccessing publicly available trends\nBuilding analytics tools for public content\n\nHow it works: Provide your app’s credentials (API Key and Secret), receive a Bearer Token, include the token in API request headers. No user involvement required.\n\nSecurity Best Practice: Store all credentials (API Keys, Secrets, Bearer Tokens) in environment variables or secure configuration files – never hardcode them into your application code. If credentials are exposed, regenerate them immediately in the developer portal.\n\nX API v2: Endpoints and Resource Types\n\nThe X API comes in two versions: v1.1 (legacy, no longer updated) and v2 (current standard). All new projects should use v2, which provides access to endpoints organized by resource type – Posts, Users, Trends, Engagement, and more. Each resource supports specific operations (read, create, update, delete) depending on your tier and permissions.\n\nPosts (Tweets) – The Core Resource\n\nWhat you can do: Retrieve posts, search for posts matching criteria, create new posts, delete posts, access timelines\n\nCommon endpoints:\n\nGET /2/tweets — Lookup specific posts by ID\nGET /2/tweets/search/recent — Search recent posts (last 7 days)\nPOST /2/tweets — Create a new post\nGET /2/users/:id/tweets — Get posts from a specific user\n\nPosts are the foundation of the X API. Almost every use case involves retrieving, searching, or creating posts in some way.\n\nUsers – Profile Information\n\nWhat you can do: Access user profiles, get follower information, search for users\n\nCommon endpoints:\n\nGET /2/users/by/username/:username — Get user by handle\nGET /2/users/:id — Get user by ID\nGET /2/users/:id/followers — Get user’s followers\n\nUser endpoints let you build profiles, track followers, and verify account information without manually visiting X.\n\nEngagement – Likes, Retweets, Replies\n\nWhat you can do: See engagement metrics, track who liked or retweeted posts, manage user engagement\n\nCommon endpoints:\n\nGET /2/tweets/:id/liked_by — See who liked a post\nPOST /2/users/:id/likes — Like a post\nGET /2/tweets/:id/quote_tweets — Get quote tweets (retweets with added commentary)\n\nEngagement endpoints power analytics dashboards and community management tools by tracking interactions and responses to content.\n\nLists – User Collections\n\nWhat you can do: Create and manage curated lists of users, access posts from list members\n\nCommon endpoints:\n\nGET /2/lists — List your lists\nPOST /2/lists/:id/members — Add member to list\nGET /2/lists/:id/tweets — Get posts from list members\n\nLists are useful for organizing accounts and creating targeted feeds without following everyone publicly.\n\nTrends – What’s Happening Now\n\nWhat you can do: Access real-time trending topics and hashtags\n\nCommon endpoints:\n\nGET /2/trends — Get trending topics\nGET /2/users/personalized_trends — Get personalized trending topics for a user\n\nTrends data powers discovery features and helps applications surface relevant conversations happening right now on X.\n\nFiltered Stream – Real-Time Data\n\nWhat you can do: Subscribe to a real-time stream of posts matching your rules, receive notifications as posts are created\n\nCommon endpoints:\n\nGET /2/tweets/search/stream — Connect to filtered stream\nPOST /2/tweets/search/stream/rules — Create or modify stream rules\n\nFiltered stream is powerful for applications that need real-time updates (monitoring brand mentions, tracking specific keywords, etc.) without constantly polling the search endpoint.\n\nDirect Messages – Private Communication\n\nWhat you can do: Send and receive direct messages, manage conversations\n\nCommon endpoints:\n\nGET /2/dm_events — Retrieve direct messages\nPOST /2/dm_conversations/:id/messages — Send a message\n\nDirect message endpoints enable customer support automation and notification systems built on top of X.\n\nNote: Not all endpoints are available on all tiers. Free tier access is heavily restricted. The Basic tier ($200/month) provides access to most commonly used endpoints. Check the official X API documentation to verify endpoint availability for your tier before building features.\n\nRate Limits and Quota Management\n\nThe X API v2 enforces two types of limits: request rate limits (per 15-minute windows) and monthly post consumption limits (tracked across the calendar month).\n\n📨 Request Rate Limits (Per 15-Minute Windows)\n\nDifferent endpoints have different rate limits based on your tier.\n\nEndpoint Example\tFree Tier\tBasic Tier\tPro Tier\nGET /2/users/:id (lookup user)\t1 req / 24 hours\t100 requests / 24 hours\t900 requests / 15 mins\nPOST /2/tweets (create post)\tNot available\tAvailable\tAvailable\nGET /2/tweets/search/recent\tLimited\tAvailable\t450 requests / 15 mins\n\nFree tier uses per-endpoint limits measured in 24-hour windows (very restrictive). Basic and Pro tiers use 15-minute windows, which are much more generous because the window resets frequently.\n\n📊 Monthly Post Consumption Limits\n\nSeparate from request rate limits, search and stream endpoints consume from a monthly “post quota.” Once consumed, you can’t query these endpoints until the next calendar month.\n\nFree tier: 10,000 posts/month\nBasic tier: 500,000 posts/month\nPro tier: 2,000,000+ posts/month\n\nThese limits apply specifically to: recent search, filtered stream, user timelines, and mention timelines.\n\n🚨 What Happens When You Hit a Limit\n\nWhen you exceed a rate limit, X returns an HTTP 429 (Too Many Requests) error response with a Retry-After header indicating how many seconds to wait before retrying.\n\nWhen you exhaust your monthly post quota, X returns a 429 error indicating the quota limit is reached. You’re blocked from querying that endpoint until the next calendar month begins.\n\nBest Practice: Implement exponential backoff and retry logic in your application. When you receive a 429 error, wait the duration specified in Retry-After before retrying. For monthly quota exhaustion, cache your search results aggressively to avoid querying the same data repeatedly.\n\nFive Optimization Strategies: Reduce Costs and Improve Performance\n\nWith limited rate limits and monthly quotas, optimization directly impacts your application’s capability and cost. Here are proven strategies to reduce API consumption.\n\n1. Use Field Selection to Reduce Response Size\n\nBy default, API responses return many fields you might not need. The fields parameter lets you request only specific data.\n\nInstead of:\n\nGET /2/tweets?ids=TWEET_ID\n\nUse:\n\nGET /2/tweets?ids=TWEET_ID\u0026tweet.fields=created_at,public_metrics\u0026expansions=author_id\u0026user.fields=username\n\nThe second request returns only the data you need, resulting in smaller responses and faster processing.\n\n2. Implement Application-Level Caching\n\nCache API responses in your database or cache layer with appropriate TTL values:\n\nStatic content (usernames, display names): 24 hours\nSemi-dynamic content (post text, engagement counts): 6 hours\nReal-time content (trending topics): 30 minutes to 1 hour\n\nReal impact: A dashboard that previously fetched trending posts every 15 minutes can drop to every 2 hours with caching, reducing daily API calls from 96 to 12—an 87.5% reduction.\n\n3. Batch Requests Whenever Possible\n\nSome endpoints accept multiple IDs in a single request.\n\nInstead of 3 separate requests:\n\nGET /2/tweets?ids=ID1 GET /2/tweets?ids=ID2 GET /2/tweets?ids=ID3\n\nUse 1 batch request:\n\nGET /2/tweets?ids=ID1,ID2,ID3\n\nThis reduces your consumption from 3 requests to 1, saving 67% of your quota.\n\n4. Use Backoff and Retry Logic\n\nWhen hitting rate limits or temporary errors, retry with exponential backoff:\n\nWait 1 second before retry 1\nWait 2 seconds before retry 2\nWait 4 seconds before retry 3\nWait 8 seconds before retry 4\n\nThis prevents hammering the API and gives temporary issues time to resolve.\n\n5. Consider Filtered Stream Instead of Polling\n\nInstead of repeatedly asking “Are there new posts matching my criteria?” (polling), subscribe to webhooks where X pushes notifications when matching posts appear.\n\nPolling approach: Check every 5 minutes = 288 checks/day. Most checks return “no new data” (wasted quota).\n\nFiltered stream approach: Receive notification only when data changes. Zero wasted requests. Real-time updates.\n\nCombined Impact: Applying all five optimization strategies together can reduce your API consumption 70-90% compared to unoptimized code. A dashboard consuming 5,000 units daily can drop to 500-1,500 units through optimization alone, without requesting a quota increase.\n\nError Handling: Common Issues and Solutions\n\nUnderstanding common error codes helps you debug and recover gracefully.\n\nError Code\tHTTP Status\tCause\tSolution\nInvalid Request\t400\tMalformed request or missing required fields\tReview request format, ensure all required parameters present\nUnauthorized\t401\tMissing or invalid credentials\tCheck that Bearer Token or OAuth tokens are correct and not expired\nForbidden\t403\tAuthenticated but not authorized (insufficient permissions)\tRequest additional scopes in your OAuth flow, get user re-approval\nNot Found\t404\tResource doesn’t exist (invalid ID, deleted content)\tVerify resource ID is correct and still exists\nRate Limited\t429\tToo many requests within the time window\tImplement backoff, wait for rate limit window to reset (check Retry-After header)\nQuota Exceeded\t429\tMonthly post quota exhausted\tWait until next calendar month, or request quota increase\n\n🔧 Parsing Error Responses\n\nWhen an error occurs, X returns JSON with details:\n\n{ \"errors\": [ { \"message\": \"The `ids` query parameter value is invalid\", \"type\": \"https://api.x.com/2/problems/invalid-request\" } ] }\n\nBest practice: Always wrap API calls in try-catch blocks and log errors to a monitoring system. This helps you identify patterns and debug issues faster.\n\nGet Your X API Key: Step-by-Step\n\nThe process has simplified significantly compared to the old Twitter API, but there are still critical steps:\n\n🔗 Step 1: Create a Developer Account\n\nNavigate to X Developer Portal\nSign in with your X account (or create one)\nComplete developer profile setup\nAwait approval (typically 5-10 minutes)\n\nFirst-time users will see an onboarding wizard that guides you through creating your first Project and App. If you don’t see this, click “Projects \u0026 Apps” in the left sidebar.\n\n📂 Step 2: Create a Project\n\nA Project is a container for one or more Apps. Think of it as a workspace.\n\nIn the Developer Portal, click “Create Project”\nName your project (e.g., “Analytics Dashboard”)\nDescribe your use case\nSelect your access tier (start with Free for testing)\n\nBy default, you’re on the Free tier. To upgrade: Go to the “Products” section in the developer portal → Find the X API v2 card and click “View Access Levels” → Select the tier you want\n\n🔨 Step 3: Create an App\n\nWithin your project, click “Create App”\nChoose an App name (e.g., “Brand Monitor Bot”)\nAccept terms\nGenerate your API keys\n\n🔑 Step 4: Access Your Credentials\n\nNavigate to your app’s “Keys and Tokens” tab. You’ll find:\n\nAPI Key (Consumer Key): A public identifier for your app. Safe to share in source code.\nAPI Secret Key (Consumer Secret): Keep this secure! Never expose it in client-side code or version control.\nBearer Token (for app-only auth): Used for app-only authentication (read-only, no user context needed). Also keep secure.\nClient ID \u0026 Secret (for OAuth 2.0): OAuth 2.0 credentials. Only visible if you enable OAuth 2.0 in your app settings.\n\nCritical Security Warning: These credentials display only once. Copy them immediately to a secure location (password manager, encrypted file, environment variables). Never commit to version control or publish publicly. If exposed, regenerate immediately.\n\nRecommended Tools \u0026 Resources\n\nOfficial X API Documentation: The authoritative source for all endpoints, parameters, and examples.\nRate Limits Reference: Complete breakdown of all endpoint rate limits by tier.\nX Postman Collection: Pre-built API requests for testing in Postman. Eliminates manual endpoint crafting.\nX Developer Community Forum: Connect with other developers, ask questions, report issues.\nX Dev GitHub: Official sample code, SDKs, and libraries for Python, JavaScript, Java, and more.\nClient Libraries: Official and community-maintained SDKs in multiple languages. Saves time vs. raw HTTP requests.\n\nFAQ: Common Questions About the X API\n\nThe Free tier is available but extremely limited (500 posts/month, 1 request per 24 hours on most endpoints). It’s suitable only for development and proof-of-concept work. For production applications, the Basic tier ($200/month) is the practical minimum.\n\nOAuth 2.0 authenticates on behalf of a specific user and grants permission scopes. Bearer token (app-only) authenticates as your application to access public data. Use OAuth 2.0 when users need to login and grant permissions; use Bearer tokens for public data without user involvement.\n\nOAuth tokens don’t expire automatically—they remain valid until explicitly revoked or regenerated. Best practice: regenerate tokens every 90 days for security. If you suspect a token is compromised, regenerate immediately.\n\nYou receive an HTTP 429 response with a Retry-After header. Implement exponential backoff and retry after the specified duration. Your request is rejected, so no quota is consumed for failed attempts.\n\nYes. Submit a quota increase request through the Google Cloud Console. Provide your use case, user count, and realistic usage estimates. Google reviews and approves/denies based on compliance and legitimacy. Quota increases are free.\n\nFree tier: development and testing only. Basic ($200/month): most real-world projects (content monitoring, automation, small applications). Pro ($5,000/month): high-traffic applications, APIs serving many end users. Enterprise ($42k+): mission-critical systems requiring SLAs and dedicated support.\n\nNeed more help? Check the X Developer Documentation or visit the X Developer Community Forum to connect with other developers and get answers from the community.\n\nNext Steps\n\nBuilding with the X API is straightforward once you understand the pricing, rate limits, and optimization strategies. Whether you’re monitoring brand conversations, automating content, or analyzing trends, the API provides everything you need. Start with a small project, implement the five optimization strategies early, and grow from there.\n\nThe difference between a scalable application and one that struggles often comes down to implementation details. Plan thoroughly, optimize aggressively from day one, and your X integration will thrive. Ready to get started? Head to developer.x.com, create your first project, and begin building!\n\nSupport\n\nIf you have read the instructions but still have any questions, you can always contact our support specialists or read articles in the Help Center.\n\nAsk for help\n\nForum\n\nContact Elfsight peers, share your thoughts, and participate in community activities!\n\nJoin us\n\nWishlist\n\nVisit Wishlist to offer features that you need but the Form Builder doesn’t have yet.\n\nShare Your Idea\n\nHi, I’m Kristina – content manager at Elfsight. My articles cover practical insights and how-to guides on smart widgets that tackle real website challenges, helping you build a stronger online presence.",
    "link": "https://elfsight.com/blog/how-to-get-x-twitter-api-key-in-2026/",
    "snippet": "For production applications, the Basic tier ($200/month) is the practical minimum. What's the difference between OAuth 2.0 and Bearer tokens?",
    "title": "How to Get X API Key: Complete 2026 Guide to Pricing ... - Elfsight"
  },
  {
    "content_readable": "Updated February 2026 — X just launched pay-as-you-go API pricing on February 6. Here's what every tier costs, what changed, and what it means for indie builders.\n\nIf you're building anything that touches X data (a social listening tool, a bot, a startup that depends on post volume) you've probably had a rough couple of years. The X API has been through more pricing changes since Elon Musk's acquisition than most platforms see in a decade.\n\nThe latest change landed on February 6, 2026: X announced a pay-as-you-go model, moving away from fixed monthly tiers for some developers. It's the most significant structural shift since the original price hike that doubled Basic from $100 to $200.\n\nThis guide covers everything: current pricing, what pay-as-you-go actually means in practice, who it helps, and whether alternatives are now worth a serious look.\n\nCurrent X API Pricing Tiers (2026)\n\nThe fixed tier system remains available alongside the new pay-as-you-go option. Here's where things stand:\n\nTier\tMonthly Price\tAnnual Price\tRead Requests\tWrite Requests\nFree\t$0\t$0\tWrite-only (no reads)\t500 posts/month\nBasic\t$200\t$2,100 (save 12.5%)\t15,000/month\t50,000/month\nPro\t$5,000\t$54,000 (save 10%)\t1,000,000/month\tHigher limits\nEnterprise\t$42,000+/month\tCustom\tCustom\tCustom + $1/month per connected account\n\nWhat Each Tier Actually Gets You\n\nFree is write-only and essentially useless for anything that needs to read or analyse posts. 500 writes per month is enough for a simple bot that posts updates, and nothing more. If you were on the old generous free tier, those days are long gone.\n\nBasic at $200/month is the entry point for any real use case. You get 15,000 read requests per month and 50,000 writes.\n\nThat sounds reasonable until you start building something with meaningful volume, 15,000 reads disappears fast if you're doing any kind of monitoring or search. For context, that's roughly 500 reads per day.\n\nPro at $5,000/month is where the cliff edge is. There's no middle ground between $200 and $5,000. One of the most complained-about aspects of the current pricing structure. One million reads per month unlocks at this tier, along with full-archive search and real-time filtering. For most indie builders, this price point is simply out of reach.\n\nEnterprise at $42,000+/month is for large organisations that need complete data access, dedicated support, and custom terms. The additional $1/month per connected account fee is notable for platforms that authenticate many users.\n\nThe Big February 2026 Change: Pay-As-You-Go\n\nOn February 6, 2026, X announced a shift to consumption-based billing, similar to how AWS or Google Cloud charge for compute.\n\nHere's how it works:\n\nInstead of a fixed monthly fee, developers buy credits and spend them per API operation\nDifferent operations have different costs. Reading a post, searching posts, and writing all carry separate prices\nLegacy free tier users who were still active will move to pay-as-you-go and receive a one-time $10 voucher\nBasic and Pro fixed plans remain available for those who prefer predictable billing\nDevelopers can opt into pay-as-you-go from their existing fixed plan\n\nX also added auto top-up settings (credits purchase automatically when balance runs low) and spending caps (requests stop when a monthly limit is hit), which addresses one of the biggest complaints about the old system, the fear of runaway costs.\n\nWho this helps: Developers with inconsistent or low usage who were previously forced into a $200/month commitment even for occasional API calls. If you use the API sporadically, pay-as-you-go could be significantly cheaper.\n\nWho this doesn't help: Anyone with consistent high-volume usage who needs predictable costs. Fixed tiers remain the better option for production apps with steady read volumes.\n\nThe catch: Early analysis suggests pay-as-you-go isn't necessarily cheaper than fixed tiers at equivalent usage levels. The $200 Basic plan gives 15,000 reads per month. Plugging similar usage into the pay-as-you-go model suggests costs could run higher for developers who use the API consistently rather than sporadically.\n\nHow We Got Here: A Timeline\n\nIt's worth understanding how X arrived at this point, because the pricing trajectory matters for how much trust to place in the current structure.\n\nPre-2023 (Twitter era): The free tier offered 500,000 tweets per month. Premium plans ran from $149 to $2,499 per month. The API was a developer playground that enabled thousands of research projects, tools, and businesses.\n\nFebruary 2023: Elon Musk's X ended free API access entirely, introducing the tiered system. The move was framed as tackling the bot problem but was widely read as a revenue play, particularly given X's financial position at the time.\n\n2024: Basic doubled from $100 to $200. The free tier's post limit was cut from 1,500 to 500 per month. Enterprise fees of $1/month per connected account were introduced.\n\nNovember 2025: X launched a closed beta for pay-as-you-go pricing, giving developers in the beta a $500 voucher to experiment.\n\nFebruary 6, 2026: Pay-as-you-go pricing announced broadly, with the fixed tier system remaining alongside it.\n\nThe pattern is consistent: prices up, limits down, with periodic structural changes that keep developers guessing. As indie builder Daniel Nguyen, whose KTool app was directly affected by the original hike, put it: X carries \"a huge risk\" for makers because the platform doesn't offer the same stability or commitment to its developer community as other API providers.\n\nThe Indie Hacker Reality\n\nThe gap between $200 and $5,000 per month is where most of the damage has been done.\n\nA developer building a social listening tool for small businesses at $20 per month per customer needs 250 customers just to cover a Pro plan subscription. That's a real business. And most side projects never get there.\n\nThe community reaction when Basic doubled was telling. As one indie hacker put it at the time: \"This pricing update does not make sense in regards to getting rid of bots. They mostly want to keep their data because that's the most valuable asset they have in the age of AI.\"\n\nThat last point is key. X's data is genuinely valuable for training AI models. The pricing changes reflect that value being recognised and monetised, not just a response to the bot problem.\n\nThe real cost for the ecosystem has been the chilling effect. Tools get shut down before they launch. Researchers work around the API rather than through it. And the platform loses the developer goodwill that made Twitter's API one of the most-used in the world.\n\nShould You Consider Alternatives?\n\nThe third-party X API market has grown significantly since the original price hikes. Options include:\n\nScraping-based alternatives (various providers): Often 90-96% cheaper than the official API, but carry terms of service risk and can be unreliable as X updates its platform\nSocial data aggregators: Platforms that resell X data alongside other social networks, typically starting around $49 to $200/month with more predictable pricing\nPurpose-built tools: For specific use cases like social listening or analytics, off-the-shelf SaaS tools may be cheaper than building on the raw API\n\nBefore switching, factor in integration complexity. Stripe's developer experience warning applies here too. X's official API is well documented and switching to unofficial alternatives introduces reliability and compliance risk that could be more expensive in the long run.\n\nFor production applications that depend on X data, the official API remains the only genuinely safe option. For experimentation, research, or projects that can tolerate some instability, alternatives are worth evaluating.\n\nWhat to Do Right Now\n\nIf you're currently on a fixed Basic or Pro plan: Review whether pay-as-you-go would be cheaper for your actual usage pattern. If your API calls are inconsistent or low-volume, it might be. If you're consistently hitting your read limits, stay on the fixed plan.\n\nIf you're building something new: Factor the full API cost into your unit economics before committing. At $200/month minimum for any meaningful read access, X data needs to be central to your value proposition to justify the cost at early stage.\n\nIf you were on the legacy free tier: You'll be moved to pay-as-you-go with a $10 voucher. Set a spending cap immediately to avoid surprise bills while you evaluate your options.\n\nIf you're at $5,000/month or above: You already know this, but it's worth renegotiating directly with X's enterprise team, custom pricing exists and the $42,000+ floor for enterprise has room to move for the right use case.\n\nThe X API story isn't over. Pay-as-you-go is the latest chapter in an ongoing restructuring of how X monetises its data. Whether it signals a more developer-friendly direction or simply a new way to extract more revenue remains to be seen.\n\nFor now, the best approach is to treat X API costs as a genuine line item in your business model (not an afterthought) and build accordingly.\n\nOriginal story from January 2025 below.\n\nThe X API, a crucial tool for many startups and small businesses, is about to get a lot more expensive.\n\nIn a recent forum post, the X team announced that developers on the platform's Basic usage tier will see their monthly bill double from $100 to $200. This price hike is a significant blow to indie hackers who have long relied on the X API. Before the introduction of tiered pricing, many makers paid nothing (or next to nothing) to use the service.\n\nThe move comes as X, under Elon Musk's ownership, continues to grapple with its bot problem and search for new revenue streams. The collateral damage to legitimate startups is concerning. Unlike other platform providers, X doesn't seem to offer the same stability or investment in its developer community. The abrupt price hikes, coupled with the platform's ongoing struggles, have left many small businesses and indie projects in a precarious position.\n\nFor indie hackers and small startups that have come to rely on the X API, this price hike remains a tough pill to swallow. As the platform continues to evolve under new ownership, the future looks uncertain for the many developers who have built their businesses on X's data and functionality.",
    "link": "https://www.wearefounders.uk/the-x-api-price-hike-a-blow-to-indie-hackers/",
    "snippet": "Current X API Pricing Tiers (2026) ; Free, $0, $0 ; Basic, $200, $2,100 (save 12.5%) ; Pro, $5,000, $54,000 (save 10%) ; Enterprise, $42,000+/month ...",
    "title": "X API Pricing in 2026: Every Tier Explained (And the New Pay-As ..."
  },
  {
    "content_readable": "Crawler is not allowed!",
    "link": "https://devcommunity.x.com/",
    "snippet": "Hello X Developers, We're thrilled to officially announce the launch of our new X API Pay-Per-Use pricing model! This update is designed to empower the heart of ...",
    "title": "X Developers - Twitter"
  },
  {
    "content_readable": "Why X (Twitter) Data APIs Matter in 2026\n\nX (formerly Twitter) remains one of the most valuable sources of real-time public data. With over 500 million monthly active users, the platform generates massive amounts of data that businesses use for:\n\nSocial Listening: Monitor brand mentions, sentiment, and trends\nInfluencer Marketing: Identify and analyze influencers in your niche\nMarket Research: Track industry conversations and competitor activity\nLead Generation: Find potential customers based on their tweets and interests\nContent Strategy: Understand what content resonates with your audience\nCrisis Management: Real-time monitoring for brand reputation\n\nHowever, accessing X data programmatically has become increasingly challenging since the platform's API changes in 2023. This guide compares the best alternatives for developers who need reliable X data access.\n\nHow We Evaluated These Providers\n\nWe tested each provider based on:\n\nData Coverage: Users, tweets, followers, communities, trends\nAPI Performance: Response times and reliability\nPricing: Cost per request and value for money\nRate Limits: Requests per minute/day\nDocumentation: Quality and ease of integration\nData Freshness: Real-time vs cached data\nCompliance: Terms of service and legal considerations\n\n1. Netrows\n\nBest For: Developers needing comprehensive X + LinkedIn data\nStarting Price: $49/month\nX Endpoints: 26 endpoints\nFree Trial: 100 credits\n\nOur Top Pick for Value \u0026 Coverage\n\nX Data Coverage\n\nUsers: Profile info, about, batch lookup, tweets, followers, following, mentions, verified followers\nTweets: Tweet details, replies, quotes, retweeters, threads, articles, search\nLists: List followers and members\nCommunities: Community info, members, moderators, tweets, search\nTrends: Trending topics by location\nSpaces: Space details and participants\n\nPros\n\nMost comprehensive X API coverage (26 endpoints)\nFlexible credit pricing: 1-50 credits per call based on data volume\nCombined with 48 LinkedIn endpoints (74+ total)\nReal-time data, not cached\nFast response times (\u003c2 seconds)\nExcellent documentation with code examples\nNo annual contracts required\n99.9% uptime SLA\n\nCons\n\nNewer X API offering (launched December 2025)\nNo historical tweet archive access\n\nPricing\n\nX endpoints use tiered credit pricing based on data volume: single-item lookups (user info, trends, spaces) cost 1 credit, paginated endpoints returning 20 items cost 5 credits, batch endpoints (up to 100 items) cost 25 credits, and bulk endpoints (followers, following) cost 50 credits but return 200 profiles per request. With the $49/month Starter plan (10,000 credits), you get thousands of X API calls.\n\n2. X (Twitter) Official API\n\nBest For: Enterprise companies with large budgets\nStarting Price: $100/month (Basic), $5,000/month (Pro)\nFree Tier: Very limited (1,500 tweets/month read)\n\nPros\n\nOfficial data source\nFull compliance with X terms\nAccess to full archive (Enterprise)\nStreaming API available\n\nCons\n\nExtremely expensive ($5,000-$42,000/month for useful access)\nSevere rate limits on lower tiers\nComplex approval process\nFree tier practically unusable\nFrequent API changes and deprecations\nPoor developer experience\n\nPricing Tiers\n\nFree: 1,500 tweets/month read, 1 app\nBasic ($100/mo): 10,000 tweets/month read\nPro ($5,000/mo): 1M tweets/month read\nEnterprise ($42,000+/mo): Full access, streaming\n\n3. RapidAPI Twitter APIs\n\nBest For: Quick prototyping and testing\nStarting Price: Varies by provider ($0-$500/month)\nFree Tier: Limited requests\n\nPros\n\nMultiple providers to choose from\nEasy to test different options\nSome free tiers available\nUnified billing through RapidAPI\n\nCons\n\nInconsistent data quality across providers\nMany providers are unreliable\nLimited support\nNo SLA guarantees\nProviders frequently go offline\n\n4. Apify Twitter Scrapers\n\nBest For: One-time data collection projects\nStarting Price: $49/month (platform fee) + usage\nFree Tier: $5 free credits\n\nPros\n\nFlexible scraping options\nCan customize data extraction\nGood for bulk historical data\nMultiple Twitter actors available\n\nCons\n\nNot a real-time API\nScraping can be unreliable\nMay violate X terms of service\nRequires technical setup\nRate limited by X's anti-scraping measures\n\n5. Brandwatch\n\nBest For: Enterprise social listening\nStarting Price: Custom (typically $800+/month)\nFree Tier: Demo only\n\nPros\n\nComprehensive social listening platform\nHistorical data access\nSentiment analysis included\nMulti-platform coverage\n\nCons\n\nVery expensive\nNot developer-focused (UI-first)\nLimited API access\nAnnual contracts required\nOverkill for simple data needs\n\n6. Sprout Social\n\nBest For: Social media management teams\nStarting Price: $249/month\nFree Tier: 30-day trial\n\nPros\n\nAll-in-one social management\nGood analytics dashboard\nTeam collaboration features\nPublishing and scheduling\n\nCons\n\nNot an API provider\nLimited data export options\nExpensive for data access alone\nFocused on marketing, not developers\n\n7. Tweepy (Python Library)\n\nBest For: Python developers using official API\nStarting Price: Free (library) + X API costs\nFree Tier: Open source\n\nPros\n\nFree and open source\nWell-documented\nActive community\nEasy to use for Python developers\n\nCons\n\nStill requires X API access (expensive)\nPython only\nSubject to X API limitations\nNo additional data beyond official API\n\nPricing Comparison Table\n\nProvider\tStarting Price\tX Endpoints\tReal-time\nNetrows\t$49/mo\t26\nX Official (Pro)\t$5,000/mo\tFull\nRapidAPI\tVaries\tVaries\nApify\t$49/mo+\tScraping\nBrandwatch\t$800+/mo\tLimited\n\nWhich Provider Should You Choose?\n\nFor Developers \u0026 Startups\n\nRecommendation: Netrows\nBest value with 26 X endpoints at half-price credits. Combined with LinkedIn data, it's the most comprehensive B2B data API at $49/month. Perfect for building applications that need both professional and social data.\n\nFor Enterprise Social Listening\n\nRecommendation: X Official API (Enterprise) or Brandwatch\nIf you need full historical access, streaming, and have budget for $42,000+/year, the official API is the safest choice. Brandwatch is better if you need a complete social listening platform with analytics.\n\nFor Quick Prototyping\n\nRecommendation: Netrows or RapidAPI\nNetrows offers 100 free credits to test. RapidAPI has various free tiers but quality varies significantly.\n\nFor One-Time Data Collection\n\nRecommendation: Apify\nIf you need bulk historical data for a one-time project and don't need real-time access, Apify scrapers can work. Be aware of potential ToS issues.\n\nFrequently Asked Questions\n\nIs it legal to access X data through third-party APIs?\n\nYes, as long as the provider has legitimate access to the data. Providers like Netrows access publicly available data in compliance with applicable laws. Always check the provider's terms of service and ensure your use case is compliant.\n\nWhy is the official X API so expensive?\n\nX significantly increased API pricing in 2023 to monetize data access. The Basic tier ($100/mo) is too limited for most use cases, pushing developers to Pro ($5,000/mo) or Enterprise ($42,000+/mo) tiers.\n\nWhat X data can I access through Netrows?\n\nNetrows provides 26 X endpoints covering: user profiles, followers, following, tweets, replies, quotes, retweeters, lists, communities, trends, and spaces. All data is fetched in real-time.\n\nCan I get historical tweets?\n\nMost third-party providers (including Netrows) provide recent tweets and user timelines. For full historical archive access (tweets from years ago), you need X's Enterprise API tier.\n\nWhat's the best X API for influencer analysis?\n\nNetrows is ideal for influencer analysis with endpoints for followers, following, verified followers, engagement metrics, and user search. You can identify influencers, analyze their audience, and track their content.\n\nDo I need both X and LinkedIn data?\n\nFor B2B use cases, combining X and LinkedIn data provides the most complete picture. LinkedIn for professional background, X for real-time activity and interests. Netrows is the only provider offering both in one API.\n\nTry the Best X Data API\n\nNetrows offers 26 X endpoints plus 48 LinkedIn endpoints in one API. Get started with 100 free credits today.",
    "link": "https://netrows.com/blog/top-twitter-x-data-api-providers-2026",
    "snippet": "3. RapidAPI Twitter APIs ; Best For: Quick prototyping and testing ; Starting Price: Varies by provider ($0-$500/month) ; Free Tier: Limited ...",
    "title": "Top Twitter/X Data API Providers Compared (2026) - Netrows"
  },
  {
    "content_readable": "Crawler is not allowed!",
    "link": "https://devcommunity.x.com/t/announcing-the-x-api-pay-per-use-pricing-pilot/250253",
    "snippet": "Pricing Details ; Post (Read): $0.005 per Post fetched. ; User (Read): $0.01 per User fetched. ; DM Event (Read): $0.01 per DM Event fetched.",
    "title": "Announcing the X API Pay-Per-Use Pricing Pilot"
  },
  {
    "content_readable": "Does Twitter API Cost Money?\n\nSo, you’re diving into the world of Twitter’s API and wondering about the cost? Let’s break it down in a way that’s easy to understand. The short answer is that it depends on your usage and the level of access you need. Twitter, now known as X, has restructured its API offerings, and understanding the different tiers is crucial to avoid unexpected charges. In the past, Twitter offered more generous free access, but those days are largely gone. Nowadays, accessing the Twitter API typically involves some level of payment, especially if you’re building applications or tools that rely heavily on real-time data or large-scale data analysis. The main reason for this shift is to control the usage and ensure the stability of their platform. Think of it this way: providing free, unlimited access to their API could lead to abuse and strain their infrastructure. By implementing a paid model, Twitter aims to maintain a sustainable ecosystem for developers while also generating revenue. But don’t worry, there are still some options that might fit your budget, depending on what you’re trying to achieve.\n\nTable of Contents\n\nUnderstanding Twitter API Pricing Tiers\nFactors Influencing the Cost of Twitter API\nHow to Check Twitter API Pricing\nAlternatives to Paid Twitter API Access\nTips for Minimizing Twitter API Costs\nConclusion: Is the Twitter API Worth the Cost?\n\nUnderstanding Twitter API Pricing Tiers\n\nTo really grasp whether the Twitter API costs money for you, you’ve got to get familiar with the different pricing tiers they offer. Basically, Twitter provides various levels of access, each tailored to different needs and use cases, and each comes with its own price tag. The free tier, which was available in the past, has been significantly limited. It primarily caters to very basic use cases, such as academic research or personal projects with minimal data requirements. If you’re planning anything beyond simple, infrequent requests, you’ll likely need to consider a paid plan. The basic tier is designed for hobbyists and smaller projects. This tier usually includes access to essential endpoints, allowing you to read and write tweets, follow users, and perform basic searches. However, it comes with rate limits, which restrict the number of requests you can make within a specific timeframe. If you exceed these limits, your application might get throttled or even blocked. The enterprise tier is where things get serious. This is intended for businesses and organizations that require high-volume data access, real-time streaming, and advanced analytics. It offers more extensive endpoints, higher rate limits, and dedicated support. The pricing for this tier is usually custom, depending on your specific needs and usage. You’ll need to contact Twitter directly to discuss your requirements and get a quote. It’s also worth noting that Twitter occasionally introduces new tiers or modifies the existing ones, so it’s always a good idea to check their official developer documentation for the most up-to-date information. Keep an eye on any announcements from Twitter’s developer relations team, as they often provide insights into pricing changes and new features.\n\nFactors Influencing the Cost of Twitter API\n\nThe cost of accessing the Twitter API isn’t just about picking a tier; several factors can influence how much you end up paying. Let’s dive into some of these key elements. Data volume is a big one. The more data you pull from Twitter, the more you’re likely to pay. This is especially true if you’re using the API for large-scale data analysis or monitoring. Different tiers offer varying levels of data access, and exceeding those limits can lead to additional charges. Rate limits also play a crucial role. Each API endpoint has a rate limit, which determines how many requests you can make within a specific time window. If your application needs to make frequent requests, you’ll need a tier that offers higher rate limits, which usually comes at a higher cost. The specific endpoints you need access to can also affect the price. Some endpoints, such as those that provide real-time streaming data or historical data, might be considered premium and require a higher-tier subscription. Your intended use case matters too. Twitter might offer different pricing structures for different types of applications. For example, academic researchers might be eligible for discounted rates or special access programs. Finally, keep in mind that Twitter can change its pricing policies at any time. It’s essential to stay updated with the latest announcements and documentation to avoid any surprises. Regularly reviewing your usage and optimizing your API calls can also help you manage your costs effectively. So, before you start building your application, take the time to carefully assess your data needs, rate limit requirements, and the specific endpoints you’ll be using. This will help you choose the right tier and avoid overpaying.\n\nHow to Check Twitter API Pricing\n\nOkay, so you’re ready to figure out exactly how much the Twitter API will cost you? Here’s a step-by-step guide to checking the current pricing and understanding what you’ll be paying for. First off, head over to the Twitter Developer Platform website. This is your go-to resource for all things API-related. Look for the “Pricing” or “Plans” section. It’s usually located in the navigation menu or within the developer documentation. Once you find the pricing page, you’ll see a breakdown of the different tiers available. Each tier should list its features, rate limits, and, of course, the price. Take your time to compare the tiers and see which one best fits your needs. If you have specific requirements that aren’t covered by the standard tiers, you might need to contact Twitter’s sales team directly. They can provide custom pricing options tailored to your use case. To do this, look for a “Contact Sales” or “Get a Quote” link on the pricing page. When you reach out to sales, be prepared to provide detailed information about your project, including the expected data volume, rate limit requirements, and the specific endpoints you’ll be using. This will help them provide an accurate quote. Also, don’t forget to check the fine print. Look for any hidden fees or additional charges that might apply. For example, some tiers might charge extra for exceeding rate limits or accessing premium endpoints. Finally, stay updated with any announcements from Twitter regarding pricing changes. They often announce these changes on their developer blog or through their official Twitter account. By following these steps, you’ll be well-equipped to understand the Twitter API pricing and make an informed decision about which tier is right for you.\n\nAlternatives to Paid Twitter API Access\n\nAlright, so the Twitter API pricing might be a bit of a buzzkill. But don’t throw in the towel just yet! There are a few alternative routes you can explore if you’re looking to minimize costs or avoid paying altogether. One option is to explore open-source libraries and tools. These can sometimes provide access to Twitter data without directly using the official API. However, keep in mind that these tools might have limitations and may not be as reliable as the official API. Another approach is to use third-party APIs or data providers. These services often offer aggregated Twitter data at a lower cost than the official API. They might scrape Twitter data or use other methods to collect and provide the information you need. Just be sure to check the terms of service and ensure that you’re complying with Twitter’s policies. For academic research, Twitter sometimes offers special access programs or discounted rates. If you’re a researcher, it’s worth exploring these options. You might be able to get access to the API for free or at a reduced cost. If your needs are very limited, you might be able to get by with the basic free access that Twitter provides. This might be enough for small personal projects or simple tasks. However, be aware that the free tier has significant limitations and might not be suitable for anything beyond basic usage. Finally, consider whether you really need real-time data. If you can get by with historical data, you might be able to find datasets or archives that are available for free or at a lower cost. By exploring these alternatives, you might be able to find a solution that fits your budget and meets your needs. Just be sure to do your research and understand the limitations of each option before making a decision.\n\nTips for Minimizing Twitter API Costs\n\nOkay, so you’ve decided to use the Twitter API, but you want to keep those costs as low as possible? Smart move! Here are some practical tips to help you minimize your expenses. First and foremost, optimize your API requests. Only request the data you actually need. The more data you request, the more you’re likely to pay. Use the API’s filtering and pagination options to narrow down your results and avoid unnecessary data transfer. Cache your data whenever possible. If you’re repeatedly requesting the same data, store it locally and only update it periodically. This will reduce the number of API calls you need to make and save you money. Monitor your API usage regularly. Keep an eye on how many requests you’re making and identify any areas where you can optimize. Twitter provides usage dashboards and analytics tools that can help you track your API consumption. Implement error handling and retry mechanisms. If your application encounters errors, don’t just keep retrying the same request. Implement exponential backoff to avoid overwhelming the API and incurring unnecessary charges. Use webhooks instead of polling. Webhooks allow Twitter to push data to your application in real-time, rather than you having to constantly poll the API. This can significantly reduce the number of requests you need to make. Consider using compression to reduce the size of the data you’re transferring. This can help you save on bandwidth costs and improve the performance of your application. Review your code regularly to identify and fix any inefficient API calls. Even small optimizations can add up over time and save you a significant amount of money. Finally, stay updated with Twitter’s API documentation and best practices. By following these tips, you can significantly reduce your Twitter API costs and make your application more efficient.\n\nConclusion: Is the Twitter API Worth the Cost?\n\nSo, we’ve covered a lot about the Twitter API and its costs. The big question remains: is it worth the investment? Well, like most things, it depends. If you’re a business that relies on real-time Twitter data for marketing, customer service, or data analysis, then the API is likely worth the cost. It provides access to valuable insights and allows you to automate tasks that would otherwise be time-consuming and expensive. For researchers, the Twitter API can be a valuable tool for studying social trends, public opinion, and more. While the costs can be a barrier, the insights gained can often justify the investment. If you’re a hobbyist or developer working on a personal project, the decision is a bit more nuanced. You’ll need to carefully weigh the costs against the benefits and consider whether there are any alternative solutions that might meet your needs. Ultimately, the value of the Twitter API depends on your specific goals, budget, and technical expertise. If you’re willing to invest the time and effort to optimize your API usage and explore alternative solutions, you can often find a way to make it work for you. Just remember to stay informed about Twitter’s pricing policies and best practices, and don’t be afraid to experiment and iterate. By carefully considering all these factors, you can make an informed decision about whether the Twitter API is the right choice for you. Remember to always check the most recent data available on the X developer platform for the most accurate information. Have fun!",
    "link": "https://cbconnect-api-dev.resultsathand.com/tech-signal/twitter-api-cost-is-access-free-or-paid-1764797574",
    "snippet": "Let's break it down in a way that's easy to understand. The short answer is that it depends on your usage and the level of access you need.",
    "title": "Twitter API Cost: Is Access Free Or Paid? - Resultsathand"
  },
  {
    "content_readable": "This is part one of the Advanced Use Cases series:\n\n1️⃣ Extract Metadata from Queries to Improve Retrieval\n\n2️⃣ Query Expansion\n\n3️⃣ Query Decomposition\n\n4️⃣ Automated Metadata Enrichment\n\nSometimes a single question is multiple questions in disguise. For example: “Did Microsoft or Google make more money last year?”. To get to the correct answer for this seemingly simple question, we actually have to break it down: “How much money did Google make last year?” and “How much money did Microsoft make last year?”. Only if we know the answer to these 2 questions can we reason about the final answer.\n\nThis is where query decomposition comes in. This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach:\n\nDecompose the original question into smaller questions that can be answered independently to each other. Let’s call these ‘sub questions’ here on out.\nReason about the final answer to the original question, based on each sub-answer.\n\nWhile for many query/dataset combinations, this may not be required, for some, it very well may be. At the end of the day, often one query results in one retrieval step. If within that one single retrieval step we are unable to have the retriever return both the money Microsoft made last year and Google, then the system will struggle to produce an accurate final response.\n\nThis method ensures that we are:\n\nretrieving the relevant context for each sub question.\nreasoning about the final answer given each answer based on the contexts retrieved for each sub question.\n\nIn this article, I’ll be going through some key steps that allow you to achieve this. You can find the full working example and code in the linked recipe from our cookbook. Here, I’ll only show the most relevant parts of the code.\n\n🚀 I’m sneaking something extra into this article. I saw the opportunity to try out the structured output functionality (currently in beta) by OpenAI to create this example. For this step, I extended the OpenAIGenerator in Haystack to be able to work with Pydantic schemas. More on this in the next step.\n\nLet’s try build a full pipeline that makes use of query decomposition and reasoning. We’ll use a dataset about Game of Thrones (a classic for Haystack) which you can find preprocessed and chunked on Tuana/game-of-thrones on Hugging Face Datasets.\n\nDefining our Questions Structure\n\nOur first step is to create a structure within which we can contain the subquestions, and each of their answers. This will be used by our OpenAIGenerator to produce a structured output.\n\nfrom pydantic import BaseModel\n\nclass Question(BaseModel):\n    question: str\n    answer: Optional[str] = None\n\nclass Questions(BaseModel):\n    questions: list[Question]\n\n\nThe structure is simple, we have Questions made up of a list of Question. Each Question has the question string as well as an optional answer to that question.\n\nDefining the Prompt for Query Decomposition\n\nNext up, we need to get an LLM to decompose a question and produce multiple questions. Here, we will start making use of our Questions schema.\n\nsplitter_prompt = \"\"\"\nYou are a helpful assistant that prepares queries that will be sent to a search component.\nSometimes, these queries are very complex.\nYour job is to simplify complex queries into multiple queries that can be answered\nin isolation to eachother.\n\nIf the query is simple, then keep it as it is.\nExamples\n1. Query: Did Microsoft or Google make more money last year?\n   Decomposed Questions: [Question(question='How much profit did Microsoft make last year?', answer=None), Question(question='How much profit did Google make last year?', answer=None)]\n2. Query: What is the capital of France?\n   Decomposed Questions: [Question(question='What is the capital of France?', answer=None)]\n3. Query: {{question}}\n   Decomposed Questions:\n\"\"\"\n\nbuilder = PromptBuilder(splitter_prompt)\nllm = OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions})\n\n\nAnswering Each Sub Question\n\nFirst, let’s build a pipeline that uses the splitter_prompt to decompose our question:\n\nquery_decomposition_pipeline = Pipeline()\n\nquery_decomposition_pipeline.add_component(\"prompt\", PromptBuilder(splitter_prompt))\nquery_decomposition_pipeline.add_component(\"llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\n\nquery_decomposition_pipeline.connect(\"prompt\", \"llm\")\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question}})\nprint(result[\"llm\"][\"structured_reply\"])\n\n\nThis produces the following Questions (List[Question])\n\nquestions=[Question(question='How many siblings does Jamie have?', answer=None), \n           Question(question='How many siblings does Sansa have?', answer=None)]\n\n\nNow, we have to fill in the answer fields. For this step, we need to have a separate prompt and two custom components:\n\nThe CohereMultiTextEmbedder which can take multiple questions rather than a single one like the CohereTextEmbedder.\nThe MultiQueryInMemoryEmbeddingRetriever which can again, take multiple questions and their embeddings, returning question_context_pairs. Each pair contains the question and documents that are relevant to that question.\n\nNext, we need to construct a prompt that can instruct a model to answer each subquestion:\n\nmulti_query_template = \"\"\"\nYou are a helpful assistant that can answer complex queries.\nHere is the original question you were asked: {{question}}\n\nAnd you have split the task into the following questions:\n{% for pair in question_context_pairs %}\n  {{pair.question}}\n{% endfor %}\n\nHere are the question and context pairs for each question.\nFor each question, generate the question answer pair as a structured output\n{% for pair in question_context_pairs %}\n  Question: {{pair.question}}\n  Context: {{pair.documents}}\n{% endfor %}\nAnswers:\n\"\"\"\n\nmulti_query_prompt = PromptBuilder(multi_query_template)\n\n\nLet’s build a pipeline that can answer each individual sub question. We will call this the query_decomposition_pipeline :\n\nquery_decomposition_pipeline = Pipeline()\n\nquery_decomposition_pipeline.add_component(\"prompt\", PromptBuilder(splitter_prompt))\nquery_decomposition_pipeline.add_component(\"llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\nquery_decomposition_pipeline.add_component(\"embedder\", CohereMultiTextEmbedder(model=\"embed-multilingual-v3.0\"))\nquery_decomposition_pipeline.add_component(\"multi_query_retriever\", MultiQueryInMemoryEmbeddingRetriever(InMemoryEmbeddingRetriever(document_store=document_store)))\nquery_decomposition_pipeline.add_component(\"multi_query_prompt\", PromptBuilder(multi_query_template))\nquery_decomposition_pipeline.add_component(\"query_resolver_llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\n\nquery_decomposition_pipeline.connect(\"prompt\", \"llm\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"embedder.questions\")\nquery_decomposition_pipeline.connect(\"embedder.embeddings\", \"multi_query_retriever.query_embeddings\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"multi_query_retriever.queries\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"embedder.questions\")\nquery_decomposition_pipeline.connect(\"multi_query_retriever.question_context_pairs\", \"multi_query_prompt.question_context_pairs\")\nquery_decomposition_pipeline.connect(\"multi_query_prompt\", \"query_resolver_llm\")\n\n\nRunning this pipeline with the original question “Who has more siblings, Jamie or Sansa?”, results in the following structured output:\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question},\n                                           \"multi_query_prompt\": {\"question\": question}})\n\nprint(result[\"query_resolver_llm\"][\"structured_reply\"])\n\n\nquestions=[Question(question='How many siblings does Jamie have?', answer='2 (Cersei Lannister, Tyrion Lannister)'),\n           Question(question='How many siblings does Sansa have?', answer='5 (Robb Stark, Arya Stark, Bran Stark, Rickon Stark, Jon Snow)')]\n\n\nReasoning About the Final Answer\n\nThe final step we have to take is to reason about the ultimate answer to the original question. Again, we create a prompt that will instruct an LLM to do this. Given we have the questions output that contains each sub question and answer, we will make these inputs to this final prompt.\n\nreasoning_template = \"\"\"\nYou are a helpful assistant that can answer complex queries.\nHere is the original question you were asked: {{question}}\n\nYou have split this question up into simpler questions that can be answered in\nisolation.\nHere are the questions and answers that you've generated\n{% for pair in question_answer_pair %}\n  {{pair}}\n{% endfor %}\n\nReason about the final answer to the original query based on these questions and\naswers\nFinal Answer:\n\"\"\"\n\nresoning_prompt = PromptBuilder(reasoning_template)\n\n\nTo be able to augment this prompt with the question answer pairs, we will have to extend our previous pipeline and connect the structured_reply from the previous LLM, to the question_answer_pair input of this prompt.\n\nquery_decomposition_pipeline.add_component(\"reasoning_prompt\", PromptBuilder(reasoning_template))\nquery_decomposition_pipeline.add_component(\"reasoning_llm\", OpenAIGenerator(model=\"gpt-4o-mini\"))\n\nquery_decomposition_pipeline.connect(\"query_resolver_llm.structured_reply\", \"reasoning_prompt.question_answer_pair\")\nquery_decomposition_pipeline.connect(\"reasoning_prompt\", \"reasoning_llm\")\n\n\nNow, let’s run this final pipeline and see what results we get:\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question},\n                                           \"multi_query_prompt\": {\"question\": question},\n                                           \"reasoning_prompt\": {\"question\": question}},\n                                           include_outputs_from=[\"query_resolver_llm\"])\n\nprint(\"The original query was split and resolved:\\n\")\n\nfor pair in result[\"query_resolver_llm\"][\"structured_reply\"].questions:\n  print(pair)\nprint(\"\\nSo the original query is answered as follows:\\n\")\nprint(result[\"reasoning_llm\"][\"replies\"][0])\n\n\n🥁 Drum roll please:\n\nThe original query was split and resolved:\n\nquestion='How many siblings does Jaime have?' answer='Jaime has one sister (Cersei) and one younger brother (Tyrion), making a total of 2 siblings.'\nquestion='How many siblings does Sansa have?' answer='Sansa has five siblings: one older brother (Robb), one younger sister (Arya), and two younger brothers (Bran and Rickon), as well as one older illegitimate half-brother (Jon Snow).'\n\nSo the original query is answered as follows:\n\nTo determine who has more siblings between Jaime and Sansa, we need to compare the number of siblings each has based on the provided answers.\n\nFrom the answers:\n- Jaime has 2 siblings (Cersei and Tyrion).\n- Sansa has 5 siblings (Robb, Arya, Bran, Rickon, and Jon Snow).\n\nSince Sansa has 5 siblings and Jaime has 2 siblings, we can conclude that Sansa has more siblings than Jaime.\n\nFinal Answer: Sansa has more siblings than Jaime.\n\n\nWrapping up\n\nGiven the right instructions, LLMs are good at breaking down tasks. Query decomposition is a great way we can make sure we do that for questions that are multiple questions in disguise.\n\nIn this article, you learned how to implement this technique with a twist 🙂 Let us know what you think about using structured outputs for these sorts of use cases. And check out the Haystack experimental repo to see what new features we’re working on.",
    "link": "https://haystack.deepset.ai/blog/query-decomposition",
    "snippet": "This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach.",
    "title": "Advanced RAG: Query Decomposition \u0026 Reasoning - Haystack"
  },
  {
    "content_readable": "This page provides a list of attributes and keywords that are used in YouTrack query language. You'll also find a complete list of operators, symbols, and relative date parameters that are recognized in search queries.\n\nSeveral references on this page are not available in Simple Search. Switch to Advanced Search to access them.\n\nIssue Attributes\n\nEvery issue has base attributes that are set automatically by YouTrack. These include the issue ID, the user who created or applied the last update to the issue, and so on.\n\nThese search attributes represent an \u003cAttribute\u003e in the Search Query Grammar. Their values correspond to the \u003cValue\u003e or \u003cValueRange\u003e parameter.\n\nAttribute-based search uses the syntax attribute: value.\n\nYou can specify multiple values for the target attribute, separated by commas.\n\nExclude specific values from the search results with the syntax attribute: -value.\n\nIn many cases, you can omit the attribute and reference values directly with the # or - symbols. For additional guidelines, see Advanced Search.\n\nattachment text\n\nattachment text: \u003ctext\u003e\n\nReturns issues that include image attachments with the specified text.\n\nattachments\n\nattachments: \u003ctext\u003e\n\nReturns issues that include attachments with the specified filename.\n\nBoard\n\nBoard \u003cboard name\u003e: \u003csprint name\u003e\n\nReturns issues that are assigned to the specified sprint on the specified agile board. To find issues that are assigned to agile boards with sprints disabled, use has: \u003cboard name\u003e.\n\ncc recipients\n\ncc recipients: \u003cuser\u003e\n\nReturns tickets where the specified users are added as CCs.\n\ncode\n\ncode: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words that are formatted as code in the issue description or comments. This includes matches that are formatted as inline code spans, indented and fenced code blocks, and stack traces.\n\ncommented: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues to which comments were added on the specified date or within the specified period.\n\ncommenter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were commented by the specified user or by a member of the specified group.\n\ncomments: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in a comment.\n\ncreated\n\ncreated: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that were created on a specific date or within a specified time frame.\n\ndescription\n\ndescription: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in the issue description.\n\ndocument type\n\ndocument type: Issue | Ticket\n\nReturns either issue or ticket type documents.\n\nGantt\n\nGantt: \u003cchart name\u003e\n\nReturns issues that are assigned to the specified Gantt chart.\n\nhas\n\nhas: \u003cattribute\u003e\n\nThe has keyword functions as a Boolean search term. When used in a search query, it returns all issues that contain a value for the specified attribute. Use the minus operator (-) before the specified attribute to find issues that have empty values.\n\nFor example, to find all issues in the TST project that are assigned to the current user, have a duplicates link, have attachments, but do not have any comments, enter in: TST for: me has: duplicates , attachments , -comments.\n\nYou can use the has keyword in combination with the following attributes:\n\nAttribute\n\nDescription\n\nattachments\n\nReturns issues that have attachments.\n\nboards\n\nReturns issues that are assigned to at least one agile board. When used with an exclusion operator (-), returns issues that aren't assigned to any boards.\n\nBoard \u003cboard name\u003e\n\nReturns issues that are assigned to the specified agile board.\n\ncomments\n\nReturns issues that have one or more comments.\n\ndescription\n\nReturns issues that do not have an empty description.\n\n\u003cfield name\u003e\n\nReturns issues that contain any value in the specified custom field. Enclose field names that contain spaces in braces.\n\nGantt\n\nReturns issues that are assigned to any Gantt chart.\n\n\u003clink type name\u003e\n\nReturns issues that have links that match the specified outward name or inward name. Enclose link names that contain spaces in braces.\n\nFor example, to find issues that are linked as subtasks to parent issues, use:\n\nhas: {Subtask of}\n\nTo find issues that aren't linked to a parent issue, use:\n\nhas: -{Subtask of}\n\nlinks\n\nReturns issues that have any issue link type.\n\nstar\n\nReturns issues that have the star tag for the current user.\n\nunderestimation\n\nReturns issues where the total spent time is greater than the original estimation value.\n\nvcs changes\n\nReturns issues that contain vcs changes.\n\nvotes\n\nReturns issues that have one or more votes.\n\nwork\n\nReturns issues that have one or more work items.\n\nissue ID\n\nissue ID: \u003cissue ID\u003e, #\u003cissue ID\u003e\n\nReturns an issue that matches the specified issue ID. This attribute can also be referenced as a single value with the syntax #\u003cissue ID\u003e or -\u003cissue ID\u003e. When the search returns a single issue, the result is displayed in single issue view.\n\nIf you don't use the syntax for an attribute-based search (issue ID: \u003cvalue\u003e or #\u003cvalue\u003e), the input is also parsed as a text search. In addition to any issue that matches the specified issue ID, the search results include any issue that contains the specified ID in any text attribute.\n\nIf you set the issue ID in quotes, the input is only parsed as a text search. The search results only include issues that contain the specified ID in a text attribute.\n\nNote that even when an issue ID is parsed as a text search, the results do not include issue links. To find issues based on issue links, use the links attribute or reference a specific link type.\n\nlinks\n\nlinks: \u003cissue ID\u003e\n\nReturns all issues that contain links to the specified issue.\n\nlooks like\n\nlooks like: \u003cissue ID\u003e\n\nReturns issues in which the issue summary or description contains words that are found in the issue summary or description in the specified issue. Issues that contain matching words in the issue summary are given higher weight when the search results are sorted by relevance.\n\nmentioned in\n\nmentioned in: \u003cissue id\u003e\n\nReturns issues with issue IDs referenced in the description or a comment of the target issue. Issue IDs in supplemental text fields aren't included in the search results.\n\nmentions\n\nmentions: \u003cissue id\u003e, \u003cuser\u003e\n\nReturns issues that contain either @mention for the specified user or issue IDs referenced in the description or a comment. User mentions and issue IDs in supplemental text fields aren't included in the search results.\n\norganization\n\norganization: \u003corganization name\u003e\n\nReturns issues that belong to the specified organization. This attribute can also be referenced as a single value.\n\nproject\n\nproject: \u003cproject name\u003e | \u003cproject ID\u003e\n\nReturns issues that belong to the specified project. This attribute can also be referenced as a single value.\n\nreporter\n\nreporter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues and tickets that were created by the specified user or a member of the specified group, including tickets created on behalf of the specified user. Use me to return issues that were created by the current user.\n\nresolved date\n\nresolved date: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that were resolved on a specific date or within a specified time frame.\n\nsaved search\n\nsaved search: \u003csaved search name\u003e\n\nReturns issues that match the search criteria of a saved search. This attribute can also be referenced as a single value with the syntax #\u003csaved search name\u003e or -\u003csaved search name\u003e.\n\nsubmitter\n\nsubmitter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were submitted by the specified user or a member of the specified group on behalf of another user. Use me to return issues that were submitted by the current user.\n\nsummary\n\nsummary: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in the issue summary.\n\ntag\n\ntag: \u003ctag name\u003e\n\nReturns issues that match a specified tag. This attribute can also be referenced as a single value with the syntax #\u003ctag name\u003e or -\u003ctag name\u003e\n\nupdated\n\nupdated: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues where the most recent change occurred on a specific date or within a specified time frame.\n\nupdater\n\nupdater: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were last updated by the specified user or a member of the specified group. Use me to return issues to which you applied the last update.\n\nvcs changes\n\nvcs changes: \u003ccommit hash\u003e\n\nReturns issues that contain vcs changes that were applied in the commit object that is identified by the specified SHA-1 commit hash.\n\nvisible to\n\nvisible to: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that are visible to the specified user or a member of the specified group.\n\nvoter\n\nvoter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that have votes from the specified user or a member of the specified group.\n\nCustom Fields\n\nYou can find issues that are assigned specific values in a custom field. As with other issue attributes, you use the syntax attribute: value or attribute: -value. In this case, the attribute is the name of the custom field. In most cases, you can reference values directly with the # or - symbols.\n\nFor custom fields that are assigned an empty value, you can reference this property as a value. For example, to search for issues that are not assigned to a specific user, enter Assignee: Unassigned or #Unassigned. If the field is not assigned an empty value, find issues that do not store a value in the field with the syntax \u003cfield name\u003e: {No \u003cfield name\u003e} or has: -\u003cfield name\u003e.\n\nThis section lists the search attributes for default custom fields. Note that default fields and their values can be customized. The actual field names, values, and aliases may vary.\n\nAffected versions\n\nAffected versions: \u003cvalue\u003e\n\nReturns issues that were detected in a specific version of the product.\n\nAssignee\n\nAssignee: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns all issues that are assigned to the specified user or a member of the specified group.\n\nFix versions\n\nFix versions: \u003cvalue\u003e\n\nReturns issues that were fixed in a specific version of the product.\n\nFixed in build\n\nFixed in build: \u003cvalue\u003e\n\nReturns issues that were fixed in the specified build.\n\nPriority\n\nPriority: \u003cvalue\u003e\n\nReturns issues that match the specified priority level.\n\nState\n\nState: \u003cvalue\u003e | Resolved | Unresolved\n\nReturns issues that match the specified state.\n\nThe Resolved and Unresolved states cannot be assigned to an issue directly, as they are properties of specific values that are stored in the State field.\n\nBy default, Fixed, Won't fix, Duplicate, Incomplete, Obsolete, and Can't reproduce states are set as Resolved.\n\nThe Submitted, Open, In Progress, Reopened, and To be discussed states are set as Unresolved.\n\nSubsystem\n\nSubsystem: \u003cvalue\u003e\n\nReturns issues that are assigned to a specific subsystem within a project.\n\nType\n\nType: \u003cvalue\u003e\n\nReturns issues that match the specified issue type.\n\nIssue Links\n\nYou can search for issues based on the links that connect them to other issues. Search queries that reference a specific issue link type can be interpreted in two different ways:\n\nWhen specified as \u003clink type\u003e: \u003cissue ID\u003e, the query returns issues linked to the specified issue using this link type.\n\nUsing \u003clink type\u003e: (\u003csub-query\u003e), the query returns issues linked to any issue that matches the specified sub-query using this link type.\n\nWhen searching for linked issues, you can enter the outward name or inward name of any issue link type, then specify your search criteria.\n\nThis list contains search parameters for issue link types that are provided by default in YouTrack. The default issue link types can be customized, which means that the actual names may vary. You can also use this syntax to build search queries that refer to custom link types.\n\nlinks\n\nlinks: \u003cissue ID\u003e\n\nReturns issues that are linked to a target issue.\n\naggregate\n\naggregate \u003caggregation link type\u003e: \u003cissue ID\u003e\n\nReturns issues that are indirectly linked to a target issue. Use this search term to find, for example, issues that are parent issues for a parent issue or subtasks of issues that are also subtasks of a target issue. The results include any issue that is linked to the target issue using the specified link type, whether directly or indirectly.\n\nThis search argument is only compatible with aggregation link types.\n\nDepends on\n\nDepends on: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have depends on links to a target issue or any issue that matches the specified sub-query.\n\nDuplicates\n\nDuplicates: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have duplicates links to a target issue or any issue that matches the specified sub-query.\n\nIs duplicated by\n\nIs duplicated by: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have is duplicated by links to a target issue or any issue that matches the specified sub-query.\n\nIs required for\n\nIs required for: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have is required for links to a target issue or any issue that matches the specified sub-query.\n\nParent for\n\nParent for: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have parent for links to a target issue or any issue that matches the specified sub-query.\n\nRelates to\n\nRelates to: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have relates to links to a target issue or any issue that matches the specified sub-query.\n\nSubtask of\n\nSubtask of: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have subtask of links to a target issue or any issue that matches the specified sub-query.\n\nTime Tracking\n\nThere is a dedicated set of search attributes that you can use to find issues that contain time tracking data. These attributes look for specific values that have been added as work items to an issue.\n\nwork\n\nwork: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or phrase in a work item.\n\nwork author: \u003cuser\u003e\n\nReturns issues that have work items that were added by the specified user.\n\nwork type\n\nwork type: \u003cvalue\u003e\n\nReturns issues that have work items that are assigned the specified work type. The query work type: {No type} returns issues that have work items that are not assigned a work item type.\n\nwork date\n\nwork date: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that have work items that are recorded for the specified date or within the specified time frame.\n\ncustom work item attributes\n\nwork \u003cattribute name\u003e: \u003cattribute value\u003e\n\nReturns issues that have work items that are assigned the specified value for a specific work item attribute.\n\nSort Attributes\n\nYou can specify the sort order for the list of issues that are returned by the search query.\n\nYou can sort issues by any of the attributes on the following list. In the Search Query Grammar, these attributes represent the \u003cSortAttribute\u003e value.\n\nsort by\n\nsort by: \u003cvalue\u003e \u003csort order\u003e\n\nSorts issues that are returned by the query in the specified order.\n\nWhen you perform a text search, the results can be sorted by relevance. You cannot specify relevance as a sort attribute. For more information, see Sorting by Relevance.\n\nKeywords\n\nThere are a number of values that can be substituted with a keyword. When you use a keyword in a search query, you do not specify an attribute. A keyword is preceded by the number sign (#) or the minus operator. In the YouTrack Search Query Grammar, these keywords correspond to a \u003cSingleValue\u003e.\n\nme\n\nReferences the current user. This keyword can be used as a value for any attribute that accepts a user.\n\nWhen used as a single value (#me) the search returns issues that are assigned to, reported by, or commented by the current user.\n\nFor example, to find unresolved issues that are assigned to, reported by, or contain comments from the current user, enter #me -Resolved.\n\nThe results also include issues that contain references to the current user in any custom field that stores values as users. For example, you have a custom field Reviewed by that stores a user type. The search query #me -Resolved also includes issues that reference the current user in this custom field.\n\nmy\n\nAn alias for me.\n\nResolved\n\nThis keyword references the Resolved issue property. This property is set based on the current value or combination of values for any custom field that stores a state type. In the default State field, the Resolved property is enabled for the values Fixed, Won't fix, Duplicate, Incomplete, Obsolete, and Can't reproduce.\n\nFor projects that use multiple state-type fields, the Resolved property is only true when all the state-type fields are assigned values that are considered to be resolved.\n\nFor example, to find all resolved issues that were updated today, enter #Resolved updated: Today.\n\nUnresolved\n\nThis keyword references the Unresolved issue property. This property is set based on the current value or combination of values for any custom field that stores a state type. In the default State field, the Resolved property is disabled for the values Submitted, Open, In Progress, Reopened, and To be discussed.\n\nFor projects that use multiple state-type fields, the Unresolved property is true when any state-type field is assigned a value that is not considered to be resolved.\n\nFor example, to find all unresolved issues that are assigned to the user john.doe in the Test project, enter #Unresolved project: Test for: john.doe.\n\nReleased\n\nThis keyword references the Released property for values in a field that stores a version type. It can only be used together with the attribute name or alias for a version field. This means that it cannot be referenced as a single value.\n\nWith fields that store multiple values, the search query returns issues for which at least one of the versions that are stored in the field is marked as released.\n\nFor example, to find all issues in the Test project that are fixed in a version that has not yet been released, enter in: Test fixed in: -Released.\n\nArchived\n\nThis keyword references the Archived property for values in a field that stores a version type. It can only be used together with the attribute name or alias for a version field. This means that it cannot be referenced as a single value.\n\nWith fields that store multiple values, the search query only returns issues for which all the versions that are stored in the field are marked as archived.\n\nFor example, to find all issues in the Test project that are fixed in a version that has been archived, enter in: Test fixed in: Archived.\n\nOperators\n\nThe search query grammar applies default semantics to search queries that do not contain explicit logical operators.\n\nSearches that specify values for multiple attributes are treated as conjunctive. This means that the values are handled as if joined by an AND operator. For example, State: {In Progress} Priority: Critical returns issues that are assigned the specified state and priority.\n\nThis extends to queries that look for the presence or absence of a value for a specific attribute (has) in combination with a reverence to a specific value for the same attribute. The presence or absence of a value and the value itself are considered as separate attributes in the issue. For example, has: assignee Assignee: me only returns issues where the assignee is set and that assignee is you.\n\nFor text search, searches that include multiple words are treated as conjunctive. This means that the words are handled as if joined by an AND operator. For example, State: Open context usage returns issues that contain matching forms for both context and usage.\n\nSearches that include multiple values for a single attribute are treated as disjunctive. This means that the values are handled as if joined by an OR operator. For example, State: {In Progress}, {To be discussed} returns issues that are assigned either one or the other of these two states.\n\nYou can override the default semantics by applying explicit operators to the query.\n\nand\n\nThe AND operator combines matches for multiple search attributes to narrow down the search results. When you join search arguments with the AND operator, the resulting issues must contain matches for all the specified attributes. Use this operator for issue fields that store enum[*] types and tags.\n\nSearch arguments that are joined with an AND operator are always processed as a group and have a higher priority than other arguments that are joined with an OR operator in the query.\n\nHere are a few examples of search queries that contain AND operators:\n\nTo find issues in the Ktor project that are tagged as both Next build and to be tested, enter:\n\nin: Ktor and tag: {Next build} and tag: {to be tested}\n\nThe AND operator between the two tags ensures that the results only contain issues that have both tags.\n\nTo find all issues that are set as Critical priority in the Ktor project or are set as Major priority and are assigned to you in the Kotlin project, enter:\n\nin: Ktor #Critical or in: Kotlin #Major and for: me\n\nIf you were to remove the operators in this query, the references to the project and priority are parsed as disjunctive (OR) statements. The reference to the assignee (me) is then joined with a conjunctive (AND) statement. Instead of getting critical issues in the Ktor project plus a list of major-priority issues that you are assigned in Kotlin, you would only issues that are assigned to you that are either major or critical in either Ktor or Kotlin.\n\nor\n\nThe OR operator combines matches for multiple search attribute to broaden the search results.\n\nThis is very useful when searching for a term which has a synonym that might be used in an issue instead. For example, a search for lesson OR tutorial returns issues that contain matching forms for either \"lesson\" or \"tutorial\". If you remove the OR operator from the query, the search is performed conjunctively, which means the result would only include issues that contain matching forms for both words.\n\nHere's another example of a search query that contains an OR operator:\n\nTo find all issues in the Ktor project that are assigned to you or are tagged as to be tested in any project, enter:\n\nin: Ktor for: me or tag: {to be tested}\n\nParentheses\n\nUsing parentheses ( and ) combines various search arguments to change the order in which the attributes and operators are processed. The part of a search query inside the parentheses has priority and is always processed as a single unit.\n\nThe most common use of parentheses is to enclose two search arguments that are separated by an OR operator and further restrict the search results by joining additional search arguments with AND operators.\n\nAny time you use parentheses in a search query, you need to provide all the operators that join the parenthetical statement to neighboring search arguments. For example, the search query in: Kotlin #Critical (in: Ktor and for:me) cannot be processed. It must be written as in: Kotlin #Critical or (in: Ktor and for:me) instead.\n\nHere's an example of a search query that uses parentheses:\n\nTo find all issues that are assigned to you and are either assigned Critical priority in the Kotlin project or are assigned Major priority in the Ktor project, enter:\n\n(in: Kotlin #Critical or in: Ktor #Major) and for: me\n\nSymbols\n\nThe following symbols can be used to extend or refine a search query.\n\nSymbol\n\nDescription\n\nExamples\n\n-\n\nExcludes a subset from a set of search query results. When you use this symbol with a single value, do not use the number sign.\n\nTo find all unresolved issues except for issues with minor priority and sort the list of results by priority in ascending order, enter #unresolved -minor sort by: priority asc.\n\n#\n\nIndicates that the input represents a single value.\n\nTo find all unresolved issues in the MRK project that were reported by, assigned to, or commented by the current user, enter #my #unresolved in: MRK.\n\n,\n\nSeparates a list of values for a single attribute. Can be used in combination with a range.\n\nTo find all issues assigned to, reported or commented by the current user, which were created today or yesterday, enter #my created: Today, Yesterday.\n\n..\n\nDefines a range of values. Insert this symbol between the values that define the upper and lower ranges. The search results include the upper and lower bounds.\n\nTo find all issues fixed in version 1.2.1 and in all versions from 1.3 to 1.5, enter fixed in: 1.2.1, 1.3 .. 1.5.\n\nTo find all issues created between March 10 and March 13, 2018, enter created: 2018-03-10 .. 2018-03-13.\n\n*\n\nWildcard character. Its behavior is context-dependent.\n\nWhen used with the .. symbol, substitutes a value that determines the upper or lower bound in a range search. The search results are inclusive of the specified bound.\n\nWhen used in an attribute-based search, matches zero or more characters at the end of an attribute value. For more information, see Wildcards in Attribute-based Search.\n\nWhen used in text search, matches zero or more characters in a string. For more information, see Wildcards in Text Search.\n\nTo find all issues created on or before March 10, 2018, enter created: * .. 2018-03-10\n\nTo find issues that have tags that start with refactoring, enter tag: refactoring*.\n\nTo find unresolved issues that contain image attachments in PNG format, enter #Unresolved attachments: *.png.\n\n?\n\nMatches any single character in a string. You can only use this wildcard to search in attributes that store text. For more information, see Wildcards in Text Search.\n\nTo find issues that contain the words \"prioritize\" or \"prioritise\" in the issue description, enter description: prioriti?e\n\n{ }\n\nEncloses attribute values that contain spaces.\n\nTo find all issues with the Fixed state that have the tag to be tested, enter #Fixed tag: {to be tested}.\n\nDate and Period Values\n\nSeveral search attributes reference values that are stored as a date. You can search for dates as single values or use a range of values to define a period.\n\nSpecify dates in the format: YYYY-MM-DD or YYYY-MM or MM-DD. You also can specify a time in 24h format: HH:MM:SS or HH:MM. To specify both date and time, use the format: YYYY-MM-DD}}T{{HH:MM:SS. For example, the search query created: 2010-01-01T12:00 .. 2010-01-01T15:00 returns all issues that were created on 1 January 2010 between 12:00 and 15:00.\n\nPredefined Relative Date Parameters\n\nYou can also use pre-defined relative parameters to search for date values. The values for these parameters are calculated relative to the current date according to the time zone of the current user. The actual value for each parameter is shown in the query assist panel.\n\nThe following relative date parameters are supported:\n\nParameter\n\nDescription\n\nNow\n\nThe current instant.\n\nToday\n\nThe current calendar day.\n\nTomorrow\n\nThe next calendar day.\n\nYesterday\n\nThe previous calendar day.\n\nSunday\n\nThe calendar Sunday for the current week.\n\nMonday\n\nThe calendar Monday for the current week.\n\nTuesday\n\nThe calendar Tuesday for the current week.\n\nWednesday\n\nThe calendar Wednesday for the current week.\n\nThursday\n\nThe calendar Thursday for the current week.\n\nFriday\n\nThe calendar Friday for the current week.\n\nSaturday\n\nThe calendar Saturday for the current week.\n\n{Last working day}\n\nThe most recent working day as defined by the Workdays that are configured in the settings on the Time Tracking page in YouTrack.\n\n{This week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the current week.\n\n{Last week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the previous week.\n\n{Next week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the next week.\n\n{Two weeks ago}\n\nThe period from 00:00 Monday to 23:59 Sunday for the calendar week two weeks prior to the current date.\n\n{Three weeks ago}\n\nThe period from 00:00 Monday to 23:59 Sunday for the calendar week three weeks prior to the current date.\n\n{This month}\n\nThe period from the first day to the last day of the current calendar month.\n\n{Last month}\n\nThe period from the first day to the last day of the previous calendar month.\n\n{Next month}\n\nThe period from the first day to the last day of the next calendar month.\n\nOlder\n\nThe period from 1 January 1970 to the last day of the month two months prior to the current date.\n\nCustom Date Parameters\n\nIf the predefined date parameters don't help you find issues that matter most to you, define your own date range in your search query. Here are a few examples of the queries you can write with custom date parameters:\n\nFind issues that have new comments added in the last seven days:\n\ncommented: {minus 7d} .. Today\n\nFind issues that were updated in the last two hours:\n\nupdated: {minus 2h} .. *\n\nFind unresolved issues that are at least one and a half years old:\n\ncreated: * .. {minus 1y 6M} #Unresolved\n\nFind issues that are due in five days:\n\nDue Date: {plus 5d}\n\nTo define a custom time frame in your search queries, use the following syntax:\n\nTo specify dates or times in the past, use minus.\n\nTo specify dates or times in the future, use plus.\n\nSpecify the time frame as a series of whole numbers followed by a letter that represents the unit of time. Separate each unit of time with a space character. For example:\n\n2y 3M 1w 2d 12h\n\nQueries that specify hours will filter for events that took place during the specified hour. For example, if it is currently 15:35, a query that is written as created: {minus 48h} returns issues that were created two days ago, at any time between 3 and 4 PM. Meanwhile, a query that is written as created: {minus 2d} returns all issues that were created two days ago at any time between midnight and 23:59.\n\nThis level of precision only applies to hours. A query that references the unit of time as 14d returns exactly the same results as 2w.\n\nSearch queries that specify units of time shorter than one hour (minutes, seconds) are not supported.\n\nSearch Query Grammar\n\nThis page provides a BNF description of the YouTrack search query grammar.\n\n\u003cSearchRequest\u003e ::= \u003cOrExpression\u003e \u003cOrExpession\u003e ::= \u003cAndExpression\u003e ('or' \u003cAndExpression\u003e)* \u003cAndExpression\u003e ::= \u003cAndOperand\u003e ('and' \u003cAndOperand\u003e)* \u003cAndOperand\u003e ::= '('\u003cOrExpression\u003e? ')' | Term \u003cTerm\u003e ::= \u003cTermItem\u003e* \u003cTermItem\u003e ::= \u003cQuotedText\u003e | \u003cNegativeText\u003e | \u003cPositiveSingleValue\u003e | \u003cNegativeSingleValue\u003e | \u003cSort\u003e | \u003cHas\u003e | \u003cCategorizedFilter\u003e | \u003cText\u003e \u003cCategorizedFilter\u003e ::= \u003cAttribute\u003e ':' \u003cAttributeFilter\u003e (',' \u003cAttributeFilter\u003e)* \u003cAttribute\u003e ::= \u003cname of issue field\u003e \u003cAttributeFilter\u003e ::= ('-'? \u003cValue\u003e ) | ('-'? \u003cValueRange\u003e) | \u003cLinkedIssuesQuery\u003e \u003cLinkedIssuesQuery\u003e ::= ( \u003cOrExpression\u003e ) \u003cValueRange\u003e ::= \u003cValue\u003e '..' \u003cValue\u003e \u003cPositiveSingleValue\u003e ::= '#'\u003cSingleValue\u003e \u003cNegativeSingleValue\u003e ::= '-'\u003cSingleValue\u003e \u003cSingleValue\u003e ::= \u003cValue\u003e \u003cSort\u003e ::= 'sort by:' \u003cSortField\u003e (',' \u003cSortField\u003e)* \u003cSortField\u003e ::= \u003cSortAttribute\u003e ('asc' | 'desc')? \u003cHas\u003e ::= 'has:' \u003cAttribute\u003e (',' \u003cAttribute\u003e)* \u003cQuotedText\u003e ::= '\"' \u003ctext without quotes\u003e '\"' \u003cNegativeText\u003e ::= '-' \u003cQuotedText\u003e \u003cText\u003e ::= \u003ctext without parentheses\u003e \u003cValue\u003e ::= \u003cComplexValue\u003e | \u003cSimpleValue\u003e \u003cSimpleValue\u003e ::= \u003cvalue without spaces\u003e \u003cComplexValue\u003e ::= '{' \u003cvalue (can have spaces)\u003e '}'\n\nGrammar is case-insensitive.\n\nFor a complete list of search attributes, see Issue Attributes.\n\nTo see sample queries for common use cases, see Sample Search Queries.\n\n11 November 2025",
    "link": "https://www.jetbrains.com/help/youtrack/cloud/search-and-command-attributes.html",
    "snippet": "This page provides a list of attributes and keywords that are used in YouTrack query language. You'll also find a complete list of operators, symbols, and ...",
    "title": "Search Query Reference | YouTrack Cloud Documentation - JetBrains"
  },
  {
    "content_readable": "Introduced in 2020, the GitHub user profile README allow individuals to give a long-form introduction. This multi-part tutorial explains how I setup my own profile to create dynamic content to aid discovery of my projects:\n\nwith the Liquid template engine and Shields (Part 1 of 4)\nusing GitHub's GraphQL API to query dynamic data about all my repos (keep reading below)\nfetching RSS and Social cards from third-party sites (Part 3 of 4)\nautomating updates with GitHub Actions (Part 4 of 4)\n\nYou can visit github.com/j12y to see the final result of what I came up with for my own profile page.\n\nThe GitHub Repo Gallery\n\nThe intended behavior for my repo gallery is to create something similar to pinned repositories but with a bit more visual pizzazz to identify what the projects are about.\n\nIn addition to source code, the repo can have metadata associated with it:\n\n✔️ Name of the repository\n✔️ Short description of the project\n✔️ Programming language used for the project\n✔️ List of tags / topics\n✔️ Image that can be used for social cards\n\nAbout\n\nThe About has editable fields to set the description and topics.\n\nSettings\n\nThe Settings includes a place to upload an image for social media preview cards.\n\nIf you don't set a preview card image, GitHub will generate one automatically that includes some basic profile statistics and your user profile image.\n\nGetting Started with the GitHub REST API\n\nThe way I structured this project is to build a library of any functions related to querying GitHub in src/gh.ts. I used a .env file to store my personal access (classic) token for authentication during local development.\n\n├── package.json\n├── .env\n├── src\n│   ├── app.ts\n│   ├── gh.ts\n│   └── template\n│       ├── README.liquid\n│       ├── contact.liquid\n│       └── gallery.liquid\n└── tsconfig.json\n\n\nI started by using REST endpoints with the Octokit library and TypeScript bindings.\n\n// src/gh.ts\nimport { Octokit } from 'octokit';\nimport { RestEndpointMethodTypes } from '@octokit/plugin-rest-endpoint-methods'\nconst octokit = new Octokit({ auth: process.env.TOKEN});\n\nexport class GitHub {\n    // GET /users/{user}\n    // https://docs.github.com/en/rest/users/users#get-a-user\n    async getUserDetails(user: string): Promise\u003cRestEndpointMethodTypes['users']['getByUsername']['response']['data']\u003e {\n        const { data } = await octokit.rest.users.getByUsername({\n            username: user\n        });\n\n        return data;\n    };\n}\n\n\nFrom src/app.ts I initialize the GithHub class, fetch the results, and can inspect the data being returned as a way to get comfortable with the various endpoints.\n\n// src/app.ts\nimport dotenv from 'dotenv';\nimport { GitHub } from \"./gh\";\n\nexport async function main() {\n  dotenv.config();\n  const gh = new GitHub()\n\n  const details = await gh.getUserDetails();\n  console.log(details);\n}\nmain();\n\n\nI typically get started on projects with simple tests like this to make sure all the various pieces to an integration can be configured and work together before getting too far.\n\nUse the GitHub GraphQL Endpoint\n\nTo get the data needed for the gallery layout, it would be necessary to make multiple calls to REST endpoints. In addition there is some data not yet available from the REST endpoint at all.\n\nSwitching to query using the GitHub GraphQL interface becomes helpful. This single endpoint can process a number of queries and give precise control over the data needed.\n\n💡 The GitHub GraphQL Explorer was fundamentally useful for me to get the right queries defined\n\nThis query needs authorization with the personal access token to fetch profile details about followers similar to some of the details returned from the REST endpoints.\n\n// src/gh.ts\n\nconst { graphql } = require(\"@octokit/graphql\")\n\nexport class GitHub \n    // https://docs.github.com/en/graphql\n    graphqlWithAuth = graphql.defaults({\n        headers: {\n            authorization: `token ${process.env.TOKEN}`\n        }\n    })\n\n    async getProfileOverview(name: string): Promise\u003cany\u003e {\n        const query = `\n            query getProfileOverview($name: String!) { \n                user(login: $name) { \n                    followers(first: 100) {\n                        totalCount\n                        edges {\n                            node {\n                                login\n                                name\n                                twitterUsername\n                                email\n                            }\n                        }\n                    }\n                }\n            }\n        `;\n        const params = {'name': name};\n\n        return await this.graphqlWithAuth(query, params);\n    }\n}\n\n\nThere are other resources such as Learn GraphQL if you haven't written many queries yet which explains the basics around syntax, schemas, and types.\n\nGetting used to GitHub's GraphQL schema primarily involves walking a series of edges to find linked nodes for objects of interest and their data attributes. In this case, I started by querying a user profile, finding the list of linked followers, and then inspecting their corresponding node's login, name, and email address.\n\n   ┌────────────┐\n   │    user    │\n   └─────┬──────┘\n         │\n         └──followers\n               │\n               ├─── totalCount\n               │\n               └─── edges\n                     │\n                     └── node\n\n\n\nFaceted Search by Topic Frequency\n\nI often want to find repositories by a topic. The user interface makes it easy to filter among many repositories by programming language such as python but unless you know which topics are relevant can become hit or miss. Was it nlp or nltk I used to categorize related repositories. Did I use dolby or dolbyio to identify repos I have for work projects.\n\nA faceted search that narrows down the number of matching repositories can be helpful for finding relevant projects like this. Given topics on GitHub are open-ended and not constrained to fixed values, it can be easy to accidentally categorize repos with variations like lambda and aws-lambda such that searches only identify partial results.\n\nTo address this, a GraphQL query gathering topics by frequency of usage within an organization or individual account can help with identifying the most useful topics.\n\nThe steps for this would be:\n\nQuery repository topics\nProcess results to group topics by frequency\nUse a template to render the gallery\n\n1 - Query Repository Topics\n\nI used the following GraphQL query to fetch my repositories and their corresponding topics.\n\nconst query = `\n    query getReposOverview($name: String!) {\n        user(login: $name) {\n            repositories(first: 100 ownerAffiliations: OWNER) {\n                edges {\n                    node {\n                        name\n                        url\n                        description\n                        openGraphImageUrl\n                        repositoryTopics(first: 100) {\n                            edges {\n                                node {\n                                    topic {\n                                        name\n                                    }\n                                }\n                            }\n                        }\n                        primaryLanguage {\n                            name\n                        }\n                    }\n                }\n            }\n        }\n    }\n`;\n\n\nThis query starts by filtering by user owned repositories (not counting forks) along with the metadata such as the social image.\n\n2 - Process Results and Group Topics by Frequency\n\nIterating over the results of the query the convention used was to look for anything with the topic github-gallery as something to be featured in the gallery. We also get a count of usage for each of the other topics and programming languages.\n\nvar topics: {[id: string]: number } = {};\nvar languages: {[id: string]: number } = {};\nvar gallery: {[id: string]: any } = {};\n\nconst repos = await gh.getReposOverview(user);\nfor (let repo of repos.user.repositories.edges) {\n  // Count occurrences of each topic\n  repo.node.repositoryTopics.edges.forEach((topic: any) =\u003e {\n    if (topic.node.topic.name == 'github-gallery') {\n      gallery[repo.node.name] = repo;\n    } else {\n      topics[topic.node.topic.name] = topic.node.topic.name in topics ? topics[topic.node.topic.name] + 1 : 1;\n    }\n  });\n\n  // Count and include count of language used\n  if (repo.node.primaryLanguage) {\n    languages[repo.node.primaryLanguage.name] = repo.node.primaryLanguage.name in languages ? languages[repo.node.primaryLanguage.name] + 1 : 1;\n  }\n}\n\n\n3 - Use a template to render the gallery\n\nThe topics are ordered by how often they are used. From the previous post on setting up a dynamic profile, I'm passing scope to the liquid engine for any data to be made available in a template.\n\n  // Share topics sorted by frequency of use for filtering repositories\n  // from the organization\n  scope['topics'] = Object.entries(topics).sort(function (first, second) {\n    return second[1] - first[1];\n  });\n  scope['languages'] = Object.entries(languages).sort(function (first, second) {\n    return second[1] - first[1];\n  });\n\n  // Gather topics across repos\n  scope['gallery'] = Object.values(gallery);\n\n\n\nThe repository page on GitHub uses query parameters to sort and filter, so items like topic:nltk can be passed directly in the URL to load a filtered view of repositories. The shields create a nice looking button for navigating to the topic, and use of icons for programming languages helps find relevant code samples.\n\n\u003cp\u003eExplore some of my projects: \u003cbr/\u003e\n{% for language in languages %}\u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=language%3A{{language[0]}}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/badge/{{ language[0] }}-{{ language[1] }}-lightgrey?logo={{ language[0] }}\u0026label={{ language[0] }}\u0026labelColor=000000\" alt=\"{{ language[0] }}\"/\u003e\u003c/a\u003e {% endfor %}\n{% for topic in topics %}\u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=topic%3A{{topic[0]}}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/static/v1?label={{topic[0]}}\u0026message={{ topic[1] }}\u0026labelColor=blue\"/\u003e\u003c/a\u003e {% endfor %}\n\u003c/p\u003e\n\n\nThe presentation includes a 3-column row in a table for displaying the metadata about each featured gallery project. This could display all repositories, but limiting to one or two rows seems sensible for managing screen space.\n\n{% for tile in gallery limit:3 %}\n\u003ctd width=\"25%\" valign=\"top\" style=\"padding-top: 20px; padding-bottom: 20px; padding-left: 30px; padding-right: 30px;\"\u003e\n\u003ca href=\"{{ tile.node.url }}\"\u003e\u003cimg src=\"{{ tile.node.openGraphImageUrl }}\"/\u003e\u003c/a\u003e\n\u003cp\u003e\u003cb\u003e\u003ca href=\"{{ tile.node.url }}\"\u003e{{ tile.node.name }}\u003c/b\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e{{ tile.node.description }}\u003cbr/\u003e\n{% for topic in tile.node.repositoryTopics.edges %} \u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=topic%3A{{topic.node.topic.name }}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/badge/{{ topic.node.topic.name | replace: \"-\", \"--\" }}-blue?style=pill\"/\u003e\u003c/a\u003e {% endfor %}\n\u003c/p\u003e\n\u003c/td\u003e\n{% endfor %}\n\n\nWith all of that put together, we now have a gallery that displays a picture along with the name, description, and tags. The picture can highlight a user interface, architectural diagram, or some other branded visual to help identify the purpose of the project visually.\n\nWe can also use this to maintain our list of topics and make finding relevant topics for an audience easier to discover.\n\nLearn more\n\nI hope this overview helps with getting yourself sorted. The next article will dive into some of the other ways of aggregating content.\n\nFetching RSS and Social Cards for GitHub Profile (Part 3 of 4)\nAutomating GitHub Profile Updates with Actions (Part 4 of 4)\n\nDid this help you get your own profile started? Let me know and follow to get notified about updates.",
    "link": "https://dev.to/j12y/query-github-repo-topics-using-graphql-35ha",
    "snippet": "Creating a customized user profile page for GitHub to showcase work projects and make navigation to relevant topics easier.",
    "title": "Query GitHub Repo Topics Using GraphQL - DEV Community"
  },
  {
    "content_readable": "Updated\n\n4 days ago\n\nWith millions of conversations happening all over the web each day, it can be a long and tedious task trying to get more relevant mentions and tighten the scope of your query, but with the help of Advanced Topic Query, it can be at your fingertips.\n\nIn Social Listening, you have the option to create an advanced query that is not limited to ANY, ALL, or NONE formatting of query building. Advanced query builder can be used to form complex text queries which are not possible with a normal query builder.\n\nWhat is an Advanced Topic Query?\n\nAdvanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more.\n\nBy using advanced query you can pinpoint relevant information which is not possible with basic topic query.\n\nIt gives you the power to find the needle in a haystack.\n\n​\n\nBasic Topic Query v/s Advanced Topic Query\n\nWith more operators to use you can fetch conversations by language, geography, social media channel, volume, author, #listening, @account monitoring, user segment, and much more, it can give you access to more actionable insights.\n\nIn Basic Query, you can only use boolean operators like OR/ NOT/ AND/ along with NEAR. On the other hand, in Advanced Topic Query, it gives you access to use OR with/ inside AND, NOT (nested and within operator use cases), advanced operators, exact match operators etc.\n\nLet's see the use cases where advanced query will help in getting more insightful mentions –\n\nUse case #1: To search \"pepsi\" OR \"drink\" along with \"cups\".\n\nBasic Query\n\nAdvancd Query\n\nUse case #2: To get mentions of \"pepsi\" along with \"coke\" or \"sprite\" but not \"miranda\" with people having \"follower count\" between 100 to 1000 on \"twitter\".\n\nBasic Query\n\nAdvanced Query\n\nNot feasible in the basic Topic query\n\nThis is where we need the advanced Topic query.​\n\nHow to create an advanced Topic query?\n\nClick the New Tab icon. Under Sprinklr Insights, Click Topics within Listening.\n\nOn the Topics window, click Add Topic in the top right corner. Fill in the required fields and click Create.\n\nIn the Setup Query tab of Create New Topic window, select Advanced Query in the query section.\n\n​\n\nType your query in the Advanced Query field with the required operators and syntax.\n\nClick Save.\n\nTip: While using Instagram as a Listening Source, be sure that your query keywords include hashtags.\n\nWhich operators to use for building Topic queries?\n\nOperators for Topic queries\n\nIn creation of advanced queries along with boolean operators OR/ AND/ NOT/ etc, Sprinklr also supports operator types –\n\nSearch Operators\n\nExact Match Operators\n\nOperators for Getting Post Replies/Comments​\n\nSprinklr provides its user edge by giving them power to use Keywords List inside advanced query along with Operators mentioned.\n\nCreate query using Topic query operators\n\nFollowing are some most used operator examples and their results –\n\nOperator\n\nExample\n\nResult\n\nhello\n\nSearch for the term \"hello\"\n\nsocial sprinklr\n\nSearch for the phrases \"social\" and \"sprinklr\"\n\n​\n\nNote: Using this will show preview but topic can not be saved as it will show error, Use \"Social Sprinklr\" or (Social AND/OR/ NOT/ NEAR Sprinklr) to eliminate error.\n\nAND\n\nsocial AND sprinklr\n\nSearch for \"social\" and \"sprinklr\" anywhere within the complete message, irrespective of keywords between them\n\nOR\n\nsocial OR sprinklr\n\nSearch for \"social\" or \"sprinklr\"\n\nNOT\n\n\"social media\" NOT \"facebook\"\n\nSearch for results that contain \"social media\" but not \"facebook\"\n\n~\n\n\"social media\"~10\n\nSearch for \"social\" and \"media\" within 10 words of each other\n\nNEAR\n\nsocial NEAR/10 media\n\nSearch for \"social\" and \"media\" within 10 words of each other\n\nNote: This operator can be used with keyword lists.\n\nONEAR\n\nsocial ONEAR/10 media\n\nSearch for \"social\" and \"media\" within 10 words of each other in an ordered way\n\nNote: This operator searches social ahead of media.\n\ntitle\n\ntitle: (\"social media\")\n\nSearch for social media in the title of the message\n\nNote: It is mostly used for News, blogs, reviews and other sites.\n\nauthor\n\nauthor: \"social_media\"\n\nFetches all the mentions from author name: social_media\n\nSome other operators which are supported by Sprinklr are –\n\nProximity: It is used to define proximity or distance between 2 keywords only, whereas, NEAR can be used to define proximity between two keywords as well as keyword lists.\n\nOnear (Ordered Near): It sets the order in which the keywords will appear. For example, Keyword-List1 ONEAR/10 Keyword-List2 will ensure keywords from Keyword-List1 appear first and then Keyword-List2 keywords will follow within space of maximum 10.\n\nStep by step guide to make advanced Topic query\n\nUse case\n\nTo write query fetching mentions of ZARA –\n\n​\n\n(# listening is used for instagram listening)\n\nGetting mention along with clothing or fashion related terms only –\n\nRemoving profanity from mention (use case specific) –\n\nRemoving profanity from mention (use case specific) –\n\nAs social media has lots of profane words you can also remove it by making a keyword list and negating it from query –\n\nFiltering Mentions in English –\n\n​\n\nApplying source input as Twitter –\n\nGetting mentions of those users which have followers between 100 to 1000 –\n\n​\n\nAdvanced example showcasing use of Topic query operators and keyword list –\n\nBest practices while using Advanced Query\n\nUse of Parentheses\n\n​Parentheses are not necessary to enclose a search query but can be useful while grouping operations together for more complex queries.\n\n​\n\nFor example, if you want to return results that mention Samsung or Apple phones, and also want to query content that mentions phones along with either Apple or Samsung, you could use parentheses around Apple and Samsung to group three keywords together, as shown below –\n\nphone AND (Apple OR Samsung)\n\n​\n\nUse of parentheses within brackets, is further explained below with an example –\n\n[(internet of things ~3) OR iot OR internetofthings) AND (robots OR robot OR #robot)] NOT [things]\n\nTip: You can also use parentheses within brackets to set off additional operations within the Advanced Query field. The end result should look similar to the result summary of a basic query, built using multiple operations within a single section.\n\n\nAs a part of the rest of the query, this will perform the following operations –\n\nSearch for posts that contain the phrase \"internet of things\" or \"#internetofthings\"\n\nFrom within those results, keep any result that also says \"robots\" or \"robot\" or \"#robot\" within three words (a proximity search) of either \"internet of things\" or \"iot\" or \"internetofthings\".\n\nDiscard any results that just have the phrase \"things\" within.\n\nParentheses nested within brackets intend to set off different operations as isolated processes. In the previous example, if you build an Advanced Query that states [(internet of things OR iot OR internet of things) AND (robots OR robot OR #robot)] your query will return results that contain ANY of the first three terms and the second three terms.\n\nHowever, if you build an Advanced Query that states [internet of things OR iot OR internet of things AND robots OR robot OR #robot], your query will return any result that contains the phrase \"internet of things\" or the word \"iot\" or the word \"robot\" or the hashtag #robot or specifically the phrase \"internet of things\" within the same message as the word \"robots\".\n\nNote:\n\nYou cannot use a \"NOT\" statement with an \"OR\" statement.\n\n\nExample:\n( social OR NOT media ) ❌\n( social NOT media ) ✅\n\n(( social OR ( media NOT facebook )) ✅\n\nWhy?\n\nQuery should not contain \"NOT\" terms in \"OR\" with other terms, \"NOT\" clauses should be used in \"AND\" with other terms, using \"NOT\" in \"OR\" will bring too much data.\n\nUse of Quotation marks\n\nQuotation marks can be used for phrases in which you are looking for an exact match of those particular words in a specific order. Using parentheses or quotation marks for single-word queries is not mandatory.\n\nUse straight quotation marks ( \" \" ) for outlining phrases within it. The use of curved quotation marks (“ ”) will not produce your desired results.\n\nParentheses are generally used to group keywords or phrases joined by one or more operators together, but with other keywords involved, parentheses and quotations would act differently. For example –\n\nVersion 1: \"Phil Schiller\" AND \"Apple Marketing\" will return results for content with the exact phrase Phil Schiller (or phil schiller) and the exact phrase Apple Marketing (or apple marketing).\n\nNote: Here exact does not mean case sensitive as in the case of exactMessage Operator.\n\nExample: exactMessage: (\"Phil Schiller\" AND \"Apple Marketing\"), which will fetch results for phrase Phil Schiller (not phil schiller) and the exact phrase Apple Marketing (not apple marketing).\n\n\nVersion 2: \"Phil Schiller\" AND (Apple OR Marketing) will return results for content with the phrase \"Phil Schiller\" (together) and at least one of the words, Apple or Marketing.\n\nHandling for Broad \u0026 Ambiguous Keywords\n\nIt is very important to not use/reduce use of broad keywords in advanced queries. Broader keywords will fetch mentions that are unrelated to topic of interest, and eventually hinder dashboard/insights\n\nFor all keywords used in an advanced topic query, ensure they are directly related to the topic of interest.\n\nIn case keywords are broad but relevant to topic, they should be tied to some relevant keywords related to that topic, by using NEAR Operators\n\nExample: Robot is an important keyword for Robot Company. However just using this keyword will fetch irrelevant keywords as it’s a broad keyword used for other entities as well (Robot Street, etc).\n\nInstead of using just Robot keyword, we should use: Robot NEAR/4 (Technology OR “machine” OR # tech OR IOT OR “Internet of things” ….)\n\nNote how keywords related to Robot are used with NEAR Operator. Related keywords could be related entities, industry keywords, parent company, country keywords, etc.\n\nFrequently asked questions\n\n​\n\nIs it compulsory to put quotation marks around phrases like \"apple music\" or can we use apple music directly?\n\nHow can I eliminate posts with many spam #’s or @’s?\n\nCan exact match or parent operators be used in advanced query?\n\nWhy am I able to see mentions in preview during making of topic but not in dashboard?\n\nDuring listening to @ mentions a lot of spam mentions are also getting tagged along, e.g. like wanting to get mentions of @tom but messages of @tom_fan56 are also coming. How to remove these irrelevant mentions?\n\nIf I write query as “tom” will it also fetch mentions such as tom_jerry / @tom / #tom ?\n\n​",
    "link": "https://www.sprinklr.com/help/articles/faqs-and-advanced-usecases/create-an-advanced-topic-query/646331628ea3c9635cf36711",
    "snippet": "Advanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more. By ...",
    "title": "‎Create an Advanced Topic Query | Sprinklr Help Center"
  },
  {
    "content_readable": "The query language for the Azure Resource Graph supports many operators and functions. Each work and operate based on Kusto Query Language (KQL). To learn about the query language used by Resource Graph, start with the tutorial for KQL.\n\nThis article covers the language components supported by Resource Graph:\n\nUnderstanding the Azure Resource Graph query language\n\nResource Graph tables\nExtended properties\nResource Graph custom language elements\n\nShared query syntax (preview)\nSupported KQL language elements\n\nSupported tabular/top level operators\nQuery scope\nEscape characters\nNext steps\n\nResource Graph tables\n\nResource Graph provides several tables for the data it stores about Azure Resource Manager resource types and their properties. Resource Graph tables can be used with the join operator to get properties from related resource types.\n\nResource Graph tables support the join flavors:\n\ninnerunique\ninner\nleftouter\nfullouter\n\nResource Graph table Can join other tables? Description\nAdvisorResources Yes Includes resources related to Microsoft.Advisor.\nAlertsManagementResources Yes Includes resources related to Microsoft.AlertsManagement.\nAppServiceResources Yes Includes resources related to Microsoft.Web.\nAuthorizationResources Yes Includes resources related to Microsoft.Authorization.\nAWSResources Yes Includes resources related to Microsoft.AwsConnector.\nAzureBusinessContinuityResources Yes Includes resources related to Microsoft.AzureBusinessContinuity.\nChaosResources Yes Includes resources related to Microsoft.Chaos.\nCommunityGalleryResources Yes Includes resources related to Microsoft.Compute.\nComputeResources Yes Includes resources related to Microsoft.Compute Virtual Machine Scale Sets.\nDesktopVirtualizationResources Yes Includes resources related to Microsoft.DesktopVirtualization.\nDnsResources Yes Includes resources related to Microsoft.Network.\nEdgeOrderResources Yes Includes resources related to Microsoft.EdgeOrder.\nElasticsanResources Yes Includes resources related to Microsoft.ElasticSan.\nExtendedLocationResources Yes Includes resources related to Microsoft.ExtendedLocation.\nFeatureResources Yes Includes resources related to Microsoft.Features.\nGuestConfigurationResources Yes Includes resources related to Microsoft.GuestConfiguration.\nHealthResourceChanges Yes Includes resources related to Microsoft.Resources.\nHealthResources Yes Includes resources related to Microsoft.ResourceHealth.\nInsightsResources Yes Includes resources related to Microsoft.Insights.\nIoTSecurityResources Yes Includes resources related to Microsoft.IoTSecurity and Microsoft.IoTFirmwareDefense.\nKubernetesConfigurationResources Yes Includes resources related to Microsoft.KubernetesConfiguration.\nKustoResources Yes Includes resources related to Microsoft.Kusto.\nMaintenanceResources Yes Includes resources related to Microsoft.Maintenance.\nManagedServicesResources Yes Includes resources related to Microsoft.ManagedServices.\nMigrateResources Yes Includes resources related to Microsoft.OffAzure.\nNetworkResources Yes Includes resources related to Microsoft.Network.\nPatchAssessmentResources Yes Includes resources related to Azure Virtual Machines patch assessment Microsoft.Compute and Microsoft.HybridCompute.\nPatchInstallationResources Yes Includes resources related to Azure Virtual Machines patch installation Microsoft.Compute and Microsoft.HybridCompute.\nPolicyResources Yes Includes resources related to Microsoft.PolicyInsights.\nRecoveryServicesResources Yes Includes resources related to Microsoft.DataProtection and Microsoft.RecoveryServices.\nResourceChanges Yes Includes resources related to Microsoft.Resources.\nResourceContainerChanges Yes Includes resources related to Microsoft.Resources.\nResourceContainers Yes Includes management group (Microsoft.Management/managementGroups), subscription (Microsoft.Resources/subscriptions) and resource group (Microsoft.Resources/subscriptions/resourcegroups) resource types and data.\nResources Yes The default table if a table isn't defined in the query. Most Resource Manager resource types and properties are here.\nSecurityResources Yes Includes resources related to Microsoft.Security.\nServiceFabricResources Yes Includes resources related to Microsoft.ServiceFabric.\nServiceHealthResources Yes Includes resources related to Microsoft.ResourceHealth/events.\nSpotResources Yes Includes resources related to Microsoft.Compute.\nSupportResources Yes Includes resources related to Microsoft.Support.\nTagsResources Yes Includes resources related to Microsoft.Resources/tagnamespaces.\n\nFor a list of tables that includes resource types, go to Azure Resource Graph table and resource type reference.\n\nNote\n\nResources is the default table. While querying the Resources table, it isn't required to provide the table name unless join or union are used. But the recommended practice is to always include the initial table in the query.\n\nTo discover which resource types are available in each table, use Resource Graph Explorer in the portal. As an alternative, use a query such as \u003ctableName\u003e | distinct type to get a list of resource types the given Resource Graph table supports that exist in your environment.\n\nThe following query shows a simple join. The query result blends the columns together and any duplicate column names from the joined table, ResourceContainers in this example, are appended with 1. As ResourceContainers table has types for both subscriptions and resource groups, either type might be used to join to the resource from Resources table.\n\nResources\n| join ResourceContainers on subscriptionId\n| limit 1\n\n\nThe following query shows a more complex use of join. First, the query uses project to get the fields from Resources for the Azure Key Vault vaults resource type. The next step uses join to merge the results with ResourceContainers where the type is a subscription on a property that is both in the first table's project and the joined table's project. The field rename avoids join adding it as name1 since the property already is projected from Resources. The query result is a single key vault displaying type, the name, location, and resource group of the key vault, along with the name of the subscription it's in.\n\nResources\n| where type == 'microsoft.keyvault/vaults'\n| project name, type, location, subscriptionId, resourceGroup\n| join (ResourceContainers | where type=='microsoft.resources/subscriptions' | project SubName=name, subscriptionId) on subscriptionId\n| project type, name, location, resourceGroup, SubName\n| limit 1\n\n\nNote\n\nWhen limiting the join results with project, the property used by join to relate the two tables, subscriptionId in the above example, must be included in project.\n\nExtended properties\n\nAs a preview feature, some of the resource types in Resource Graph have more type-related properties available to query beyond the properties provided by Azure Resource Manager. This set of values, known as extended properties, exists on a supported resource type in properties.extended. To show resource types with extended properties, use the following query:\n\nResources\n| where isnotnull(properties.extended)\n| distinct type\n| order by type asc\n\n\nExample: Get count of virtual machines by instanceView.powerState.code:\n\nResources\n| where type == 'microsoft.compute/virtualmachines'\n| summarize count() by tostring(properties.extended.instanceView.powerState.code)\n\n\nResource Graph custom language elements\n\nShared query syntax (preview)\n\nAs a preview feature, a shared query can be accessed directly in a Resource Graph query. This scenario makes it possible to create standard queries as shared queries and reuse them. To call a shared query inside a Resource Graph query, use the {{shared-query-uri}} syntax. The URI of the shared query is the Resource ID of the shared query on the Settings page for that query. In this example, our shared query URI is /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS. This URI points to the subscription, resource group, and full name of the shared query we want to reference in another query. This query is the same as the one created in Tutorial: Create and share a query.\n\nNote\n\nYou can't save a query that references a shared query as a shared query.\n\nExample 1: Use only the shared query:\n\nThe results of this Resource Graph query are the same as the query stored in the shared query.\n\n{{/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS}}\n\n\nExample 2: Include the shared query as part of a larger query:\n\nThis query first uses the shared query, and then uses limit to further restrict the results.\n\n{{/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS}}\n| where properties_storageProfile_osDisk_osType =~ 'Windows'\n\n\nSupported KQL language elements\n\nResource Graph supports a subset of KQL data types, scalar functions, scalar operators, and aggregation functions. Specific tabular operators are supported by Resource Graph, some of which have different behaviors.\n\nSupported tabular/top level operators\n\nHere's the list of KQL tabular operators supported by Resource Graph with specific samples:\n\nKQL Resource Graph sample query Notes\ncount Count key vaults\ndistinct Show resources that contain storage\nextend Count virtual machines by OS type\njoin Key vault with subscription name Join flavors supported: innerunique, inner, leftouter, and fullouter. Limit of three join or union operations (or a combination of the two) in a single query, counted together, one of which might be a cross-table join. If all cross-table join use is between Resource and ResourceContainers, then three cross-table join are allowed. Custom join strategies, such as broadcast join, aren't allowed. For which tables can use join, go to Resource Graph tables.\nlimit List all public IP addresses Synonym of take. Doesn't work with Skip.\nmvexpand Legacy operator, use mv-expand instead. RowLimit max of 2,000. The default is 128.\nmv-expand List Azure Cosmos DB with specific write locations RowLimit max of 2,000. The default is 128. Limit of 3 mv-expand in a single query.\norder List resources sorted by name Synonym of sort\nparse Get virtual networks and subnets of network interfaces It's optimal to access properties directly if they exist instead of using parse.\nproject List resources sorted by name\nproject-away Remove columns from results\nsort List resources sorted by name Synonym of order\nsummarize Count Azure resources Simplified first page only\ntake List all public IP addresses Synonym of limit. Doesn't work with Skip.\ntop Show first five virtual machines by name and their OS type\nunion Combine results from two queries into a single result Single table allowed: | union [kind= inner|outer] [withsource=ColumnName] Table. Limit of three union legs in a single query. Fuzzy resolution of union leg tables isn't allowed. Might be used within a single table or between the Resources and ResourceContainers tables.\nwhere Show resources that contain storage\n\nThere's a default limit of three join and three mv-expand operators in a single Resource Graph SDK query. You can request an increase in these limits for your tenant through Help + support.\n\nTo support the Open Query portal experience, Azure Resource Graph Explorer has a higher global limit than Resource Graph SDK.\n\nNote\n\nYou can't reference a table as right table multiple times, which exceeds the limit of 1. If you do so, you would receive an error with code DisallowedMaxNumberOfRemoteTables.\n\nQuery scope\n\nThe scope of the subscriptions or management groups from which resources are returned by a query defaults to a list of subscriptions based on the context of the authorized user. If a management group or a subscription list isn't defined, the query scope is all resources, and includes Azure Lighthouse delegated resources.\n\nThe list of subscriptions or management groups to query can be manually defined to change the scope of the results. For example, the REST API managementGroups property takes the management group ID, which is different from the name of the management group. When managementGroups is specified, resources from the first 10,000 subscriptions in or under the specified management group hierarchy are included. managementGroups can't be used at the same time as subscriptions.\n\nExample: Query all resources within the hierarchy of the management group named My Management Group with ID myMG.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01\n\n\nRequest Body\n\n{\n  \"query\": \"Resources | summarize count()\",\n  \"managementGroups\": [\"myMG\"]\n}\n\n\nThe AuthorizationScopeFilter parameter enables you to list Azure Policy assignments and Azure role-based access control (Azure RBAC) role assignments in the AuthorizationResources table that are inherited from upper scopes. The AuthorizationScopeFilter parameter accepts the following values for the PolicyResources and AuthorizationResources tables:\n\nAtScopeAndBelow (default if not specified): Returns assignments for the given scope and all child scopes.\nAtScopeAndAbove: Returns assignments for the given scope and all parent scopes, but not child scopes.\nAtScopeAboveAndBelow: Returns assignments for the given scope, all parent scopes, and all child scopes.\nAtScopeExact: Returns assignments only for the given scope; no parent or child scopes are included.\n\nNote\n\nTo use the AuthorizationScopeFilter parameter, be sure to use the 2021-06-01-preview or later API version in your requests.\n\nExample: Get all policy assignments at the myMG management group and Tenant Root (parent) scopes.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-06-01-preview\n\n\nRequest Body Sample\n\n{\n  \"options\": {\n    \"authorizationScopeFilter\": \"AtScopeAndAbove\"\n  },\n  \"query\": \"PolicyResources | where type =~ 'Microsoft.Authorization/PolicyAssignments'\",\n  \"managementGroups\": [\"myMG\"]\n}\n\n\nExample: Get all policy assignments at the mySubscriptionId subscription, management group, and Tenant Root scopes.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-06-01-preview\n\n\nRequest Body Sample\n\n{\n  \"options\": {\n    \"authorizationScopeFilter\": \"AtScopeAndAbove\"\n  },\n  \"query\": \"PolicyResources | where type =~ 'Microsoft.Authorization/PolicyAssignments'\",\n  \"subscriptions\": [\"mySubscriptionId\"]\n}\n\n\nEscape characters\n\nSome property names, such as those that include a . or $, must be wrapped or escaped in the query or the property name is interpreted incorrectly and doesn't provide the expected results.\n\nDot (.): Wrap the property name ['propertyname.withaperiod'] using brackets.\n\nExample query that wraps the property odata.type:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.['odata.type']\n\n\nDollar sign ($): Escape the character in the property name. The escape character used depends on the shell that runs Resource Graph.\n\nBash: Use a backslash (\\) as the escape character.\n\nExample query that escapes the property $type in Bash:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.\\$type\n\n\ncmd: Don't escape the dollar sign ($) character.\n\nPowerShell: Use a backtick (`) as the escape character.\n\nExample query that escapes the property $type in PowerShell:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.`$type\n\n\nNext steps\n\nAzure Resource Graph query language Starter queries and Advanced queries.\nLearn more about how to explore Azure resources.",
    "link": "https://learn.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language",
    "snippet": "The query language for the Azure Resource Graph supports many operators and functions. Each work and operate based on Kusto Query Language (KQL).",
    "title": "Understanding the Azure Resource Graph query language - Microsoft"
  }
]
Output (3895 bytes)
[
  {
    "text": "# Headline\nX (formerly Twitter) has moved to a pay-as-you-go / pay-per-use API model in 2026 while keeping fixed tiers available; this significantly reduces free access and forces developers to compare consumption-based costs vs. fixed plans and third‑party alternatives.\n\n1) Most relevant sources\n\n1. **Announcing the Launch of X API Pay-Per-Use Pricing** — https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476\n   - X’s official developer announcement confirms the launch of a new Pay‑Per‑Use pricing model (snippet: “We’re thrilled to officially announce the launch of our new X API Pay‑Per‑Use pricing model ... February 13, 2026”), signaling the platform’s move to consumption billing alongside existing tiers.\n\n2. **Announcing the X API Pay-Per-Use Pricing Pilot** — https://devcommunity.x.com/t/announcing-the-x-api-pay-per-use-pricing-pilot/250253\n   - The pilot post lists concrete per‑unit pilot rates (example in snippet: “Post (Read): $0.005 per Post fetched. User (Read): $0.01 per User fetched. DM Event (Read): $0.01 per DM Event fetched”), giving a practical example of how consumption costs may be applied in pay‑per‑use billing.\n\n3. **How to Get X API Key: Complete 2026 Guide to Pricing ... (Elfsight)** — https://elfsight.com/blog/how-to-get-x-twitter-api-key-in-2026/\n   - A comprehensive 2026 guide that explains the API evolution, current tiers (notes Basic ≈ $200/mo as practical minimum), the November 2025 pay‑per‑use beta (developers received $500 vouchers), authentication methods, rate limits, and five optimization strategies to reduce costs.\n\n4. **X API Pricing in 2026: Every Tier Explained (WeAreFounders)** — https://www.wearefounders.uk/the-x-api-price-hike-a-blow-to-indie-hackers/\n   - Analysis of the Feb 6, 2026 announcement: pay‑as‑you‑go is available but fixed tiers remain; the article details tier breakpoints (Free, Basic $200, Pro $5,000, Enterprise $42k+) and argues pay‑as‑you‑go helps sporadic users but may not be cheaper for steady high usage.\n\n5. **Top Twitter/X Data API Providers Compared (Netrows)** — https://netrows.com/blog/top-twitter-x-data-api-providers-2026\n   - A comparison of official vs third‑party X data providers that recommends alternatives (Netrows, RapidAPI, Apify, Brandwatch) for different needs, noting third‑party options can be cheaper but may carry reliability/ToS and historical‑data limitations.\n\n6. **Want to understand the pricing — X API v2 (Developer Community thread)** — https://devcommunity.x.com/t/want-to-understand-the-pricing/256677\n   - A community Q\u0026A where developers attempt to calculate per‑request costs (snippet shows a sample cost calculation and confusion), highlighting that real costs depend on endpoint caps, batching rules, and how X counts “entries” per request.\n\nFinal takeaway and next steps\n\nTakeaway: X’s 2026 shift to pay‑per‑use plus maintained fixed tiers means you must model your actual API call patterns (reads, writes, searches, stream usage) to decide whether a fixed plan or consumption billing is cheaper, and aggressively apply caching, batching, and filtered streams to cut costs. Third‑party data providers can be viable for some workloads but carry tradeoffs in compliance, coverage and freshness.\n\nSuggested next steps / follow-up searches:\n- Check the official X Developer pricing page and the pay‑per‑use rate table and calculator in your developer portal for exact current unit prices and any vouchers. \n- Run a usage-cost simulation (expected reads/writes/searches/month) to compare Basic/Pro vs pay‑per‑use. \n- Search for “X API pay-per-use cost calculator”, “X API rate limits by endpoint 2026”, and “X data providers vs official API 2026” to refine your cost and compliance choices."
  }
]
s5 slack success 2026-03-01 22:54:06 → 2026-03-01 22:54:07
Input (3895 bytes)
[
  {
    "text": "# Headline\nX (formerly Twitter) has moved to a pay-as-you-go / pay-per-use API model in 2026 while keeping fixed tiers available; this significantly reduces free access and forces developers to compare consumption-based costs vs. fixed plans and third‑party alternatives.\n\n1) Most relevant sources\n\n1. **Announcing the Launch of X API Pay-Per-Use Pricing** — https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476\n   - X’s official developer announcement confirms the launch of a new Pay‑Per‑Use pricing model (snippet: “We’re thrilled to officially announce the launch of our new X API Pay‑Per‑Use pricing model ... February 13, 2026”), signaling the platform’s move to consumption billing alongside existing tiers.\n\n2. **Announcing the X API Pay-Per-Use Pricing Pilot** — https://devcommunity.x.com/t/announcing-the-x-api-pay-per-use-pricing-pilot/250253\n   - The pilot post lists concrete per‑unit pilot rates (example in snippet: “Post (Read): $0.005 per Post fetched. User (Read): $0.01 per User fetched. DM Event (Read): $0.01 per DM Event fetched”), giving a practical example of how consumption costs may be applied in pay‑per‑use billing.\n\n3. **How to Get X API Key: Complete 2026 Guide to Pricing ... (Elfsight)** — https://elfsight.com/blog/how-to-get-x-twitter-api-key-in-2026/\n   - A comprehensive 2026 guide that explains the API evolution, current tiers (notes Basic ≈ $200/mo as practical minimum), the November 2025 pay‑per‑use beta (developers received $500 vouchers), authentication methods, rate limits, and five optimization strategies to reduce costs.\n\n4. **X API Pricing in 2026: Every Tier Explained (WeAreFounders)** — https://www.wearefounders.uk/the-x-api-price-hike-a-blow-to-indie-hackers/\n   - Analysis of the Feb 6, 2026 announcement: pay‑as‑you‑go is available but fixed tiers remain; the article details tier breakpoints (Free, Basic $200, Pro $5,000, Enterprise $42k+) and argues pay‑as‑you‑go helps sporadic users but may not be cheaper for steady high usage.\n\n5. **Top Twitter/X Data API Providers Compared (Netrows)** — https://netrows.com/blog/top-twitter-x-data-api-providers-2026\n   - A comparison of official vs third‑party X data providers that recommends alternatives (Netrows, RapidAPI, Apify, Brandwatch) for different needs, noting third‑party options can be cheaper but may carry reliability/ToS and historical‑data limitations.\n\n6. **Want to understand the pricing — X API v2 (Developer Community thread)** — https://devcommunity.x.com/t/want-to-understand-the-pricing/256677\n   - A community Q\u0026A where developers attempt to calculate per‑request costs (snippet shows a sample cost calculation and confusion), highlighting that real costs depend on endpoint caps, batching rules, and how X counts “entries” per request.\n\nFinal takeaway and next steps\n\nTakeaway: X’s 2026 shift to pay‑per‑use plus maintained fixed tiers means you must model your actual API call patterns (reads, writes, searches, stream usage) to decide whether a fixed plan or consumption billing is cheaper, and aggressively apply caching, batching, and filtered streams to cut costs. Third‑party data providers can be viable for some workloads but carry tradeoffs in compliance, coverage and freshness.\n\nSuggested next steps / follow-up searches:\n- Check the official X Developer pricing page and the pay‑per‑use rate table and calculator in your developer portal for exact current unit prices and any vouchers. \n- Run a usage-cost simulation (expected reads/writes/searches/month) to compare Basic/Pro vs pay‑per‑use. \n- Search for “X API pay-per-use cost calculator”, “X API rate limits by endpoint 2026”, and “X data providers vs official API 2026” to refine your cost and compliance choices."
  }
]
Output (3893 bytes)
[
  {
    "ok": true,
    "text": "*Headline*\nX (formerly Twitter) has moved to a pay-as-you-go / pay-per-use API model in 2026 while keeping fixed tiers available; this significantly reduces free access and forces developers to compare consumption-based costs vs. fixed plans and third‑party alternatives.\n\n1) Most relevant sources\n\n1. *Announcing the Launch of X API Pay-Per-Use Pricing* — https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476\n   - X’s official developer announcement confirms the launch of a new Pay‑Per‑Use pricing model (snippet: “We’re thrilled to officially announce the launch of our new X API Pay‑Per‑Use pricing model ... February 13, 2026”), signaling the platform’s move to consumption billing alongside existing tiers.\n\n2. *Announcing the X API Pay-Per-Use Pricing Pilot* — https://devcommunity.x.com/t/announcing-the-x-api-pay-per-use-pricing-pilot/250253\n   - The pilot post lists concrete per‑unit pilot rates (example in snippet: “Post (Read): $0.005 per Post fetched. User (Read): $0.01 per User fetched. DM Event (Read): $0.01 per DM Event fetched”), giving a practical example of how consumption costs may be applied in pay‑per‑use billing.\n\n3. *How to Get X API Key: Complete 2026 Guide to Pricing ... (Elfsight)* — https://elfsight.com/blog/how-to-get-x-twitter-api-key-in-2026/\n   - A comprehensive 2026 guide that explains the API evolution, current tiers (notes Basic ≈ $200/mo as practical minimum), the November 2025 pay‑per‑use beta (developers received $500 vouchers), authentication methods, rate limits, and five optimization strategies to reduce costs.\n\n4. *X API Pricing in 2026: Every Tier Explained (WeAreFounders)* — https://www.wearefounders.uk/the-x-api-price-hike-a-blow-to-indie-hackers/\n   - Analysis of the Feb 6, 2026 announcement: pay‑as‑you‑go is available but fixed tiers remain; the article details tier breakpoints (Free, Basic $200, Pro $5,000, Enterprise $42k+) and argues pay‑as‑you‑go helps sporadic users but may not be cheaper for steady high usage.\n\n5. *Top Twitter/X Data API Providers Compared (Netrows)* — https://netrows.com/blog/top-twitter-x-data-api-providers-2026\n   - A comparison of official vs third‑party X data providers that recommends alternatives (Netrows, RapidAPI, Apify, Brandwatch) for different needs, noting third‑party options can be cheaper but may carry reliability/ToS and historical‑data limitations.\n\n6. *Want to understand the pricing — X API v2 (Developer Community thread)* — https://devcommunity.x.com/t/want-to-understand-the-pricing/256677\n   - A community Q\u0026A where developers attempt to calculate per‑request costs (snippet shows a sample cost calculation and confusion), highlighting that real costs depend on endpoint caps, batching rules, and how X counts “entries” per request.\n\nFinal takeaway and next steps\n\nTakeaway: X’s 2026 shift to pay‑per‑use plus maintained fixed tiers means you must model your actual API call patterns (reads, writes, searches, stream usage) to decide whether a fixed plan or consumption billing is cheaper, and aggressively apply caching, batching, and filtered streams to cut costs. Third‑party data providers can be viable for some workloads but carry tradeoffs in compliance, coverage and freshness.\n\nSuggested next steps / follow-up searches:\n- Check the official X Developer pricing page and the pay‑per‑use rate table and calculator in your developer portal for exact current unit prices and any vouchers. \n- Run a usage-cost simulation (expected reads/writes/searches/month) to compare Basic/Pro vs pay‑per‑use. \n- Search for “X API pay-per-use cost calculator”, “X API rate limits by endpoint 2026”, and “X data providers vs official API 2026” to refine your cost and compliance choices."
  }
]