← research-and-report

Run Detail

180afb75-34bf-4955-970a-cfa8a5896a47
success

Started

2026-03-01 22:37:15

Finished

2026-03-01 22:38:00

Steps

s1 web_search success 2026-03-01 22:37:15 → 2026-03-01 22:37:16
Input (33 bytes)
[
  {
    "query": "twitter api pricing"
  }
]
Output (2341 bytes)
[
  {
    "link": "https://docs.x.com/x-api/getting-started/pricing",
    "snippet": "Credit-based. Purchase credits upfront in the Developer Console. Credits are deducted as you make API requests.",
    "title": "Pricing - X - X Developer Platform"
  },
  {
    "link": "https://docs.x.com/x-api/introduction",
    "snippet": "Pricing. The X API uses pay-per-usage pricing. No subscriptions—pay only for what you use. Flexible scaling. Start small and grow. Costs scale with your ...",
    "title": "Programmatic access to X's posts, users, spaces, and more - X API"
  },
  {
    "link": "https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476",
    "snippet": "Legacy Free API Users: If you've been recently active on our Legacy Free tier, you'll be transitioned to Pay-Per-Use with a one-time $10 voucher ...",
    "title": "Announcing the Launch of X API Pay-Per-Use Pricing"
  },
  {
    "link": "https://www.reddit.com/r/SaaS/comments/1gh6qvj/twitters_pricing_is_ridiculous/",
    "snippet": "\"Additionally, effective July 1, 2025, we will discontinue our existing Enterprise API tiers and introduce a new streamlined v2 API tier ...",
    "title": "Twitter's Pricing is Ridiculous! : r/SaaS - Reddit"
  },
  {
    "link": "https://developer.x.com/",
    "snippet": "Fixed monthly fees ($200 or $5,000) · Tier-dependent rate limits · Monthly caps on objects · Costs jump with tiered plans ...",
    "title": "X Developer Platform - Twitter"
  },
  {
    "link": "https://medium.com/@asaan/twitter-api-changes-navigating-the-end-of-free-access-your-2024-guide-b9f9cf47ea79",
    "snippet": "New Twitter API Pricing Tiers · Basic Tier: $100/month for access to 10,000 tweets per month · Pro Tier: $5,000/month for access to 2 million ...",
    "title": "Twitter API Changes: What You Need to Know About the End of Free ..."
  },
  {
    "link": "https://docs.x.com/x-api/getting-started/about-x-api",
    "snippet": "Pay-per-usage pricing; Modern JSON response format; Flexible fields and expansions; Advanced features: annotations, conversation tracking, edit history ...",
    "title": "About the X API - X Developer Platform"
  },
  {
    "link": "https://www.xpoz.ai/blog/guides/understanding-twitter-api-pricing-tiers-and-alternatives/",
    "snippet": "Twitter API pricing has four tiers: Free ($0 with severe limits), Basic ($100/month for 10K tweets), Pro ($5,000/month for 1M tweets), and ...",
    "title": "Twitter API Pricing 2026: Tiers, Costs \u0026 Alternatives - Xpoz"
  }
]
s2 web_search success 2026-03-01 22:37:16 → 2026-03-01 22:37:17
Input (2341 bytes)
[
  {
    "link": "https://docs.x.com/x-api/getting-started/pricing",
    "snippet": "Credit-based. Purchase credits upfront in the Developer Console. Credits are deducted as you make API requests.",
    "title": "Pricing - X - X Developer Platform"
  },
  {
    "link": "https://docs.x.com/x-api/introduction",
    "snippet": "Pricing. The X API uses pay-per-usage pricing. No subscriptions—pay only for what you use. Flexible scaling. Start small and grow. Costs scale with your ...",
    "title": "Programmatic access to X's posts, users, spaces, and more - X API"
  },
  {
    "link": "https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476",
    "snippet": "Legacy Free API Users: If you've been recently active on our Legacy Free tier, you'll be transitioned to Pay-Per-Use with a one-time $10 voucher ...",
    "title": "Announcing the Launch of X API Pay-Per-Use Pricing"
  },
  {
    "link": "https://www.reddit.com/r/SaaS/comments/1gh6qvj/twitters_pricing_is_ridiculous/",
    "snippet": "\"Additionally, effective July 1, 2025, we will discontinue our existing Enterprise API tiers and introduce a new streamlined v2 API tier ...",
    "title": "Twitter's Pricing is Ridiculous! : r/SaaS - Reddit"
  },
  {
    "link": "https://developer.x.com/",
    "snippet": "Fixed monthly fees ($200 or $5,000) · Tier-dependent rate limits · Monthly caps on objects · Costs jump with tiered plans ...",
    "title": "X Developer Platform - Twitter"
  },
  {
    "link": "https://medium.com/@asaan/twitter-api-changes-navigating-the-end-of-free-access-your-2024-guide-b9f9cf47ea79",
    "snippet": "New Twitter API Pricing Tiers · Basic Tier: $100/month for access to 10,000 tweets per month · Pro Tier: $5,000/month for access to 2 million ...",
    "title": "Twitter API Changes: What You Need to Know About the End of Free ..."
  },
  {
    "link": "https://docs.x.com/x-api/getting-started/about-x-api",
    "snippet": "Pay-per-usage pricing; Modern JSON response format; Flexible fields and expansions; Advanced features: annotations, conversation tracking, edit history ...",
    "title": "About the X API - X Developer Platform"
  },
  {
    "link": "https://www.xpoz.ai/blog/guides/understanding-twitter-api-pricing-tiers-and-alternatives/",
    "snippet": "Twitter API pricing has four tiers: Free ($0 with severe limits), Basic ($100/month for 10K tweets), Pro ($5,000/month for 1M tweets), and ...",
    "title": "Twitter API Pricing 2026: Tiers, Costs \u0026 Alternatives - Xpoz"
  }
]
Output (1541 bytes)
[
  {
    "link": "https://haystack.deepset.ai/blog/query-decomposition",
    "snippet": "This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach.",
    "title": "Advanced RAG: Query Decomposition \u0026 Reasoning - Haystack"
  },
  {
    "link": "https://www.sprinklr.com/help/articles/faqs-and-advanced-usecases/create-an-advanced-topic-query/646331628ea3c9635cf36711",
    "snippet": "Advanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more. By ...",
    "title": "‎Create an Advanced Topic Query | Sprinklr Help Center"
  },
  {
    "link": "https://discuss.logseq.com/t/query-todos-on-current-page/1481",
    "snippet": "I'm struggling to figure out how to write a query to get all the todos on a particular page. I thought there might be a generic way to do this for any page.",
    "title": "Query Todos on Current Page - Questions \u0026 Help - Logseq"
  },
  {
    "link": "https://dev.to/j12y/query-github-repo-topics-using-graphql-35ha",
    "snippet": "Creating a customized user profile page for GitHub to showcase work projects and make navigation to relevant topics easier.",
    "title": "Query GitHub Repo Topics Using GraphQL - DEV Community"
  },
  {
    "link": "https://community.weweb.io/t/query-parameters-in-url/6074",
    "snippet": "I'm trying to set up a filter that generates query parameters in the url so that way I can send a QR code that links to a specific search or users can share a ...",
    "title": "Query parameters in url - Ask us anything - WeWeb Community"
  }
]
s3 fetch_content success 2026-03-01 22:37:17 → 2026-03-01 22:37:31
Input (3881 bytes)
[
  {
    "link": "https://docs.x.com/x-api/getting-started/pricing",
    "snippet": "Credit-based. Purchase credits upfront in the Developer Console. Credits are deducted as you make API requests.",
    "title": "Pricing - X - X Developer Platform"
  },
  {
    "link": "https://docs.x.com/x-api/introduction",
    "snippet": "Pricing. The X API uses pay-per-usage pricing. No subscriptions—pay only for what you use. Flexible scaling. Start small and grow. Costs scale with your ...",
    "title": "Programmatic access to X's posts, users, spaces, and more - X API"
  },
  {
    "link": "https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476",
    "snippet": "Legacy Free API Users: If you've been recently active on our Legacy Free tier, you'll be transitioned to Pay-Per-Use with a one-time $10 voucher ...",
    "title": "Announcing the Launch of X API Pay-Per-Use Pricing"
  },
  {
    "link": "https://www.reddit.com/r/SaaS/comments/1gh6qvj/twitters_pricing_is_ridiculous/",
    "snippet": "\"Additionally, effective July 1, 2025, we will discontinue our existing Enterprise API tiers and introduce a new streamlined v2 API tier ...",
    "title": "Twitter's Pricing is Ridiculous! : r/SaaS - Reddit"
  },
  {
    "link": "https://developer.x.com/",
    "snippet": "Fixed monthly fees ($200 or $5,000) · Tier-dependent rate limits · Monthly caps on objects · Costs jump with tiered plans ...",
    "title": "X Developer Platform - Twitter"
  },
  {
    "link": "https://medium.com/@asaan/twitter-api-changes-navigating-the-end-of-free-access-your-2024-guide-b9f9cf47ea79",
    "snippet": "New Twitter API Pricing Tiers · Basic Tier: $100/month for access to 10,000 tweets per month · Pro Tier: $5,000/month for access to 2 million ...",
    "title": "Twitter API Changes: What You Need to Know About the End of Free ..."
  },
  {
    "link": "https://docs.x.com/x-api/getting-started/about-x-api",
    "snippet": "Pay-per-usage pricing; Modern JSON response format; Flexible fields and expansions; Advanced features: annotations, conversation tracking, edit history ...",
    "title": "About the X API - X Developer Platform"
  },
  {
    "link": "https://www.xpoz.ai/blog/guides/understanding-twitter-api-pricing-tiers-and-alternatives/",
    "snippet": "Twitter API pricing has four tiers: Free ($0 with severe limits), Basic ($100/month for 10K tweets), Pro ($5,000/month for 1M tweets), and ...",
    "title": "Twitter API Pricing 2026: Tiers, Costs \u0026 Alternatives - Xpoz"
  },
  {
    "link": "https://haystack.deepset.ai/blog/query-decomposition",
    "snippet": "This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach.",
    "title": "Advanced RAG: Query Decomposition \u0026 Reasoning - Haystack"
  },
  {
    "link": "https://www.sprinklr.com/help/articles/faqs-and-advanced-usecases/create-an-advanced-topic-query/646331628ea3c9635cf36711",
    "snippet": "Advanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more. By ...",
    "title": "‎Create an Advanced Topic Query | Sprinklr Help Center"
  },
  {
    "link": "https://discuss.logseq.com/t/query-todos-on-current-page/1481",
    "snippet": "I'm struggling to figure out how to write a query to get all the todos on a particular page. I thought there might be a generic way to do this for any page.",
    "title": "Query Todos on Current Page - Questions \u0026 Help - Logseq"
  },
  {
    "link": "https://dev.to/j12y/query-github-repo-topics-using-graphql-35ha",
    "snippet": "Creating a customized user profile page for GitHub to showcase work projects and make navigation to relevant topics easier.",
    "title": "Query GitHub Repo Topics Using GraphQL - DEV Community"
  },
  {
    "link": "https://community.weweb.io/t/query-parameters-in-url/6074",
    "snippet": "I'm trying to set up a filter that generates query parameters in the url so that way I can send a QR code that links to a specific search or users can share a ...",
    "title": "Query parameters in url - Ask us anything - WeWeb Community"
  }
]
Output (73516 bytes)
[
  {
    "content_readable": "The X API uses pay-per-usage pricing. No subscriptions—pay only for what you use.\n\nHow it works\n\nCredit-based\n\nPurchase credits upfront in the Developer Console. Credits are deducted as you make API requests.\n\nPer-endpoint pricing\n\nDifferent endpoints have different costs. View current rates in the Developer Console.\n\nNo commitments\n\nNo contracts, subscriptions, or minimum spend. Start and stop anytime.\n\nReal-time tracking\n\nMonitor usage and costs live in the Developer Console.\n\nEarn free xAI API credits when you purchase X API credits—up to 20% back based on your spend. Learn more\n\nIf you are on a legacy subscription package (Basic or Pro), you can opt in to Pay-per-use pricing directly from the Developer Console. If you’d like to switch back to your legacy plan at any time, you can do so from the settings page within the Developer Console.\n\nDeduplication\n\nAll resources are deduplicated within a 24-hour UTC day window. If you request and are charged for a resource (such as a Post), requesting the same resource again within that window will not incur an additional charge. This means:\n\nRequesting the same Post multiple times in a day counts as one charge\nThe deduplication window resets at midnight UTC\nThis applies to all billable resources (Posts, users, etc.)\n\nDeduplication is a soft guarantee. While it occurs in the vast majority of cases, there may be specific edge cases like service outages that result in resources not being deduplicated.\n\nCredit balance\n\nYour credit balance is displayed in the Developer Console. Credits are deducted in real-time as you make API requests.\n\nMonitor your credit balance regularly to avoid service interruptions. Add credits before your balance reaches zero to ensure uninterrupted API access.Note: It is possible for an account credit balance to go slightly negative. In this case, API requests will be blocked until you add credits to cover the negative balance.\n\nAuto-recharge\n\nEnable auto-recharge to automatically top up your credit balance and avoid service interruptions. Configure this in the Developer Console:\n\nSetting\tDescription\nRecharge amount\tThe amount to add when auto-recharge triggers (e.g., $25)\nTrigger threshold\tAuto-recharge activates when your balance falls below this amount (e.g., $5)\n\nAuto-recharge requires a saved payment method set as your default. You can cancel anytime in the Developer Console or by contacting support.\n\nSpending limits\n\nSet a maximum amount you can spend per billing cycle to control costs. When the limit is reached, API requests will be blocked until the next billing cycle.\n\nOption\tDescription\nSpending limit\tSet a specific dollar amount as your maximum spend per billing cycle\n\nUse spending limits to prevent unexpected charges, especially during development and testing.\n\nFree xAI API Credits\n\nWhen you purchase X API credits, you can earn free xAI API credits based on your cumulative spend during a billing cycle.\n\nTo receive free xAI credits, you must link your xAI team to your X developer account. You can do this by visiting your account settings in the developer console.\n\nHow it works\n\nYour cumulative spend is tracked throughout each billing cycle. As you cross spending thresholds, you unlock higher reward rates. When a new billing cycle starts, your cumulative spend resets to $0.\n\nCumulative spend\tRate\n$0 – $199\t0%\n$200 – $499\t10%\n$500 – $999\t15%\n$1,000+\t20%\n\nThe rate applies to your entire cumulative balance, but you only receive the delta—what’s newly owed minus what was already credited.\n\nExample\n\nSuppose you make several purchases throughout a billing cycle:\n\nPurchase\tRate\tTotal owed\tAlready credited\tYou receive\n$100\t0%\t$0\t$0\t$0\n$100\t10%\t$20\t$0\t$20\n$150\t10%\t$35\t$20\t$15\n$150\t15%\t$75\t$35\t$40\n$250\t15%\t$112.50\t$75\t$37.50\n$250\t20%\t$200\t$112.50\t$87.50\n$1,000\t$200\n\nThis is the same amount you’d receive from a single $1,000 purchase—the order and size of purchases doesn’t affect your total rewards.\n\nMonitoring usage\n\nTrack your API usage programmatically with the Usage endpoint:\n\ncurl \"https://api.x.com/2/usage/tweets\" \\\n  -H \"Authorization: Bearer $BEARER_TOKEN\"\n\n\nThis returns daily Post consumption counts, helping you:\n\nTrack consumption against your budget\nSet up alerts when approaching limits\nIdentify high-consumption endpoints\nGenerate usage reports\n\nEnterprise pricing\n\nFor high-volume access with dedicated support, custom rate limits, and additional features, contact our enterprise sales team.\n\nPay-per-usage plans are subject to a monthly cap of 2 million Post reads. If you need higher volume, consider an Enterprise plan.\n\nNext steps",
    "link": "https://docs.x.com/x-api/getting-started/pricing",
    "snippet": "Credit-based. Purchase credits upfront in the Developer Console. Credits are deducted as you make API requests.",
    "title": "Pricing - X - X Developer Platform"
  },
  {
    "content_readable": "The X API gives you programmatic access to X’s public conversation. Read posts, publish content, manage users, and analyze trends—all through modern REST endpoints with flexible pay-per-usage pricing.\n\nWhat you can build\n\nPricing\n\nThe X API uses pay-per-usage pricing. No subscriptions—pay only for what you use.\n\nFlexible scaling\n\nStart small and grow. Costs scale with your actual usage.\n\nNo commitments\n\nNo contracts or minimum spend. Stop anytime.\n\nReal-time tracking\n\nMonitor usage and costs live in the Developer Console.\n\nCredit-based\n\nPurchase credits upfront. Deducted as you use the API.\n\nEarn free xAI API credits when you purchase X API credits—up to 20% back based on your spend. Learn more\n\nPay-per-usage plans are subject to a monthly cap of 2 million Post reads. If you need higher volume, consider an Enterprise plan.\n\nKey features\n\nData access\n\nNear real-time streaming\n\nSearch \u0026 analytics\n\nRich data objects\n\nAccess detailed, structured data for posts, users, media, and more:\n\nPosts: Full text, metrics, entities, annotations, conversation threads\nUsers: Profiles, follower counts, verification status\nMedia: Images, videos, GIFs with metadata\nPolls: Options and vote counts\n\nCustomize responses with fields and expansions to get exactly the data you need.\n\nFiltered stream\n\nGet posts delivered in near real-time as they’re published. Define up to 1,000 filtering rules to receive only matching posts.\n\n# Add a rule\ncurl -X POST \"https://api.x.com/2/tweets/search/stream/rules\" \\\n  -H \"Authorization: Bearer $TOKEN\" \\\n  -d '{\"add\": [{\"value\": \"from:xdevelopers\"}]}'\n\n# Connect to stream\ncurl \"https://api.x.com/2/tweets/search/stream\" \\\n  -H \"Authorization: Bearer $TOKEN\"\n\n\nLearn more about filtered stream →\n\nFull-archive search\n\nSearch the complete history of public posts—back to 2006. Build queries with operators for users, keywords, dates, and more.\n\ncurl \"https://api.x.com/2/tweets/search/all?query=AI%20lang:en\" \\\n  -H \"Authorization: Bearer $TOKEN\"\n\n\nMetrics\n\nAccess engagement metrics including impressions, likes, reposts, replies, and video views.Learn more about search →\n\nAPI versions\n\nVersion\tStatus\tDescription\nv2\tCurrent\tModern endpoints, flexible pricing, all new features\nEnterprise\tAvailable\tHigh-volume access with dedicated support\n\nUse X API v2 for all new projects. It’s where all new features and improvements are released.\n\nQuick start\n\n1\n\nCreate a developer account\n\n2\n\nGet your credentials\n\nGenerate your Bearer Token for app-only requests.\n\n3\n\nMake a request\n\nTry looking up a user:\n\ncurl \"https://api.x.com/2/users/by/username/xdevelopers\" \\\n  -H \"Authorization: Bearer $BEARER_TOKEN\"\n\n\nTools \u0026 libraries\n\nBrowse all libraries →\n\nSupport",
    "link": "https://docs.x.com/x-api/introduction",
    "snippet": "Pricing. The X API uses pay-per-usage pricing. No subscriptions—pay only for what you use. Flexible scaling. Start small and grow. Costs scale with your ...",
    "title": "Programmatic access to X's posts, users, spaces, and more - X API"
  },
  {
    "content_readable": "Crawler is not allowed!",
    "link": "https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476",
    "snippet": "Legacy Free API Users: If you've been recently active on our Legacy Free tier, you'll be transitioned to Pay-Per-Use with a one-time $10 voucher ...",
    "title": "Announcing the Launch of X API Pay-Per-Use Pricing"
  },
  {
    "content_readable": "whoa there, pardner!\n\nReddit's awesome and all, but you may have a bit of a problem. We've seen far too many requests come from your IP address recently.\n\nPlease wait a few minutes and try again.\n\nIf you're still getting this error after a few minutes and think that we've incorrectly blocked you or you would like to discuss easier ways to get the data you want, please contact us at this email address.\n\nYou can read Reddit's Terms of Service here.\n\nWhen contacting us, please include your Reddit account along with the following code:\n\n019cab55-93c1-71c4-976e-25cbe0c9b67d",
    "link": "https://www.reddit.com/r/SaaS/comments/1gh6qvj/twitters_pricing_is_ridiculous/",
    "snippet": "\"Additionally, effective July 1, 2025, we will discontinue our existing Enterprise API tiers and introduce a new streamlined v2 API tier ...",
    "title": "Twitter's Pricing is Ridiculous! : r/SaaS - Reddit"
  },
  {
    "content_readable": "",
    "link": "https://developer.x.com/",
    "snippet": "Fixed monthly fees ($200 or $5,000) · Tier-dependent rate limits · Monthly caps on objects · Costs jump with tiered plans ...",
    "title": "X Developer Platform - Twitter"
  },
  {
    "content_readable": "",
    "link": "https://medium.com/@asaan/twitter-api-changes-navigating-the-end-of-free-access-your-2024-guide-b9f9cf47ea79",
    "snippet": "New Twitter API Pricing Tiers · Basic Tier: $100/month for access to 10,000 tweets per month · Pro Tier: $5,000/month for access to 2 million ...",
    "title": "Twitter API Changes: What You Need to Know About the End of Free ..."
  },
  {
    "content_readable": "The X API provides programmatic access to X’s public conversation. Retrieve posts, analyze trends, build integrations, and create new experiences on the platform.\n\nWhat you can do\n\nCapability\tDescription\nRead posts\tSearch, look up, and stream posts in real-time\nPublish content\tCreate posts, replies, and threads\nManage users\tLook up users, manage follows, blocks, and mutes\nAnalyze data\tAccess metrics, trends, and engagement analytics\nBuild integrations\tSend DMs, manage lists, and interact with Spaces\n\nAPI versions\n\nX API v2 (Recommended)\n\nX API v1.1 (Legacy)\n\nEnterprise\n\nThe current version of the X API with modern features and flexible pricing.Why use v2:\n\nPay-per-usage pricing\nModern JSON response format\nFlexible fields and expansions\nAdvanced features: annotations, conversation tracking, edit history\nAll new endpoints and features\n\nGetting started:\n\nSign up at console.x.com\nCreate an app and get credentials\nMake your first request\n\nThe previous version of the X API. Limited support; use v2 for new projects.Still available:\n\nSome media upload endpoints\nLegacy streaming (deprecated)\nSome specialized endpoints\n\nMigrating to v2: See the migration guide for endpoint mapping and data format changes.\n\nHigh-volume access for businesses with advanced needs.Features:\n\nComplete firehose access\nHistorical data backfill\nDedicated support\nCustom rate limits\nCompliance streams\n\nContact enterprise sales →\n\nAvailable resources\n\nThe X API provides access to these resource types:\n\nPosts\n\nSearch, retrieve, create, and delete posts. Access timelines, threads, and quote posts.\n\nUsers\n\nLook up profiles, manage relationships, and access follower data.\n\nSpaces\n\nDiscover live audio conversations and participants.\n\nDirect Messages\n\nSend and receive private messages between users.\n\nLists\n\nCreate and manage curated lists of accounts.\n\nTrends\n\nAccess trending topics by location.\n\nv2 highlights\n\nFields and expansions\n\nRequest only the data you need. Use fields parameters to select specific attributes and expansions to include related objects.\n\ncurl \"https://api.x.com/2/tweets/123?tweet.fields=created_at,public_metrics\u0026expansions=author_id\u0026user.fields=username\" \\\n  -H \"Authorization: Bearer $TOKEN\"\n\n\nLearn more about fields →\n\nPost annotations\n\nPosts include semantic annotations identifying people, places, products, and topics. Filter streams and searches by topic.Learn more about annotations →\n\nEngagement metrics\n\nAccess public metrics (likes, reposts, replies) and private metrics (impressions, clicks) for your own posts.Learn more about metrics →\n\nConversation tracking\n\nEdit history\n\nAccess the edit history of posts, including all previous versions and edit metadata.Learn more about edit posts →\n\nPricing\n\nX API v2 uses pay-per-usage pricing:\n\nBenefit\tDescription\nNo subscriptions\tPay only for what you use\nCredit-based\tPurchase credits, deducted per request\nReal-time tracking\tMonitor usage in the Developer Console\nDeduplication\tSame resource requested twice in 24 hours is only charged once\n\nPay-per-usage plans are subject to a monthly cap of 2 million Post reads. If you need higher volume, consider an Enterprise plan.\n\nView pricing details →\n\nNext steps",
    "link": "https://docs.x.com/x-api/getting-started/about-x-api",
    "snippet": "Pay-per-usage pricing; Modern JSON response format; Flexible fields and expansions; Advanced features: annotations, conversation tracking, edit history ...",
    "title": "About the X API - X Developer Platform"
  },
  {
    "content_readable": "Understanding Twitter API Pricing Tiers and Alternatives\n\nTwitter's API pricing can be confusing. Between the free tier's harsh limitations, the Basic tier's constraints, and the jump to $5,000/month for Pro access, many developers struggle to find the right fit for their needs.\n\nThis guide breaks down exactly what you get at each Twitter API pricing tier, helps you calculate your actual costs, and explains when alternative approaches might serve you better.\n\nCurrent Twitter API Pricing Structure (2026)\n\nTwitter (now X) offers four primary API access tiers:\n\nFree Tier - $0/month\n\nWhat You Get:\n\n1 App ID\nTweet posting (up to 1,500 tweets/month)\n~1 request per 15 minutes for tweet retrieval\nUser lookup (basic)\nNo search functionality\n\nRate Limits:\n\nTweets: ~1/15 min read, 1,500/month write\nUsers: 100 requests/24 hours\n\nBest For:\n\nPersonal bots that post content\nTesting API connectivity\nHobby projects with minimal data needs\n\nNot Suitable For:\n\nAny meaningful data analysis\nBrand monitoring\nResearch projects\nProduction applications\n\nBasic Tier - $100/month\n\nWhat You Get:\n\n2 App IDs\n10,000 tweets read/month\nTweet posting (3,000 tweets/month)\n7-day search history only\nUser lookup and follows\nBasic search operators\n\nRate Limits:\n\nTweets: 10,000/month read\nSearch: 60 requests/15 min, 7-day limit\nUsers: 500 requests/24 hours\n\nBest For:\n\nSmall-scale monitoring projects\nSimple integrations\nLow-volume applications\n\nLimitations:\n\n7-day search limit kills historical analysis\n10K tweets runs out quickly for active monitoring\nNo streaming access\n\nPro Tier - $5,000/month\n\nWhat You Get:\n\n3 App IDs\n1,000,000 tweets/month\nFull archive search (all historical tweets)\nStreaming access\nAdvanced search operators\nHigher rate limits\n\nRate Limits:\n\nTweets: 1M/month\nSearch: 300 requests/15 min, full archive\nStreaming: 50 concurrent connections\n\nBest For:\n\nProfessional applications\nResearch at scale\nCompanies with dedicated social analytics\nApplications requiring historical data\n\nThe Problem: $5,000/month is a 50x jump from Basic with no intermediate option.\n\nEnterprise Tier - $42,000+/month\n\nWhat You Get:\n\nCustom limits based on negotiation\nDedicated account management\nHigher reliability SLAs\nPriority support\nCompliance features\n\nPricing:\n\nStarts at ~$42,000/month\nCustom quotes based on volume\nAnnual contracts typical\n\nBest For:\n\nLarge enterprises\nMission-critical applications\nHigh-volume data needs\nCompanies requiring compliance documentation\n\nCalculating Your Actual Costs\n\nHow to Estimate Tweet Volume\n\nBefore choosing a tier, estimate your monthly tweet needs:\n\nBrand Monitoring:\n\nSmall brand: 1,000-5,000 mentions/month\nMedium brand: 5,000-50,000 mentions/month\nLarge brand: 50,000-500,000+ mentions/month\n\nCompetitor Analysis:\n\nPer competitor: 500-5,000 tweets/month\n5 competitors: 2,500-25,000 tweets/month\n\nKeyword Tracking:\n\nNiche keyword: 100-1,000 tweets/day\nPopular keyword: 1,000-100,000+ tweets/day\n\nResearch Projects:\n\nThesis-level: 10,000-100,000 tweets\nLarge-scale study: 1M+ tweets\n\nCost Per Tweet Breakdown\n\nTier\tMonthly Cost\tTweets Included\tCost per 1K Tweets\nFree\t$0\t~100\tN/A\nBasic\t$100\t10,000\t$10.00\nPro\t$5,000\t1,000,000\t$5.00\nEnterprise\t$42,000+\tCustom\t~$2-4.00\n\nHidden Costs\n\nBeyond subscription fees, consider:\n\nDevelopment Time:\n\nAPI integration: 20-40 hours\nError handling: 10-20 hours\nMaintenance: 2-5 hours/month\n\nInfrastructure:\n\nServers for data processing\nDatabase storage\nMonitoring tools\n\nOverages:\n\nBasic: No overage option—hits stop when exhausted\nPro: Additional costs for exceeding limits\n\nThe Gap Problem\n\nTwitter's pricing creates an uncomfortable gap:\n\nBasic at $100/month gives you 10,000 tweets with a 7-day search limit\nPro at $5,000/month gives you 1,000,000 tweets with full archive\n\nThere's no middle ground. If you need:\n\nMore than 10,000 tweets/month, or\nHistorical search beyond 7 days\n\n...you're looking at a 50x price increase.\n\nThis gap has driven many users toward alternatives.\n\nWhen Alternatives Make Sense\n\nYou Need More Than Basic But Less Than Pro\n\nIf your needs fall between 10,000 and 1,000,000 tweets/month, alternatives often cost less than the $5,000 Pro tier.\n\nExample Scenario:\n\nNeed: 100,000 tweets/month\nOfficial API: $5,000/month (Pro tier required)\nThird-party alternative: $100-500/month typically\n\nYou Need Historical Data\n\nBasic tier's 7-day search limit is dealbreaking for:\n\nCompetitive analysis over time\nTrend research\nHistorical brand monitoring\nAcademic research\n\nAlternatives typically offer historical access at all price points.\n\nYou're Budget-Constrained\n\nMany legitimate use cases don't justify $5,000/month:\n\nAcademic research projects\nStartup MVPs\nSmall business monitoring\nPersonal projects\n\nYou Need Multi-Platform Data\n\nIf you're analyzing Twitter alongside Instagram, TikTok, or Reddit, managing multiple API integrations adds complexity. Some alternatives provide unified access.\n\nAlternative Options\n\nFor a comprehensive comparison of Twitter data providers, see our Best Twitter API Alternatives 2026 guide.\n\nThird-Party Data APIs\n\nServices that provide Twitter data through their own infrastructure:\n\nTypical Pricing: $0.10-0.50 per 1,000 tweets Advantages: Lower cost, historical access, simpler integration Considerations: Data coverage may not be 100%\n\nAI-Native Solutions (MCP)\n\nNatural language access through AI assistants is transforming how teams query social data. Learn more in our tutorial on How to Query Twitter with Claude AI.\n\nExample - Xpoz:\n\nFree: 100,000 results/month\nPro ($20/month): 1,000,000 results/month\nMax ($200/month): 10,000,000 results/month\n\nAdvantages: No coding required, multi-platform, exploratory-friendly Considerations: Requires AI assistant (Claude/ChatGPT)\n\nAcademic Programs\n\nTwitter has offered academic research access with elevated limits:\n\nCheck current availability (programs change)\nRequires institutional affiliation\nApplication and approval process\nRestrictions on commercial use\n\nHow Xpoz Addresses Pricing Gaps\n\nXpoz fills the gap between Twitter's Basic and Pro tiers with predictable, transparent pricing:\n\nPricing Comparison\n\nNeed\tOfficial API\tXpoz\n100K results/month\t$5,000 (Pro required)\t$0 (Free)\n1M results/month\t$5,000 (Pro)\t$20 (Pro)\n10M results/month\t$5,000+ (Pro+)\t$200 (Max)\n\nWhat You Get\n\nFree Tier ($0/month):\n\n100,000 results/month\nTwitter, Instagram, TikTok, Reddit access\nHistorical data\nCSV export\n\nPro Tier ($20/month):\n\n1,000,000 results/month\nAll platforms\nPriority processing\n\nMax Tier ($200/month):\n\n10,000,000 results/month\nAll platforms\nHighest throughput\n\nKey Differentiators\n\nNo Rate Limit Errors: Xpoz handles throttling internally Multi-Platform: One subscription covers Twitter, Instagram, TikTok, Reddit Natural Language: Query data through AI assistants without coding Historical Access: Available on all tiers, including free\n\nSample Queries\n\n\"Find all tweets mentioning 'product launch' from verified accounts\nin the past 30 days, sorted by retweet count\"\n\n\"How many times was 'competitor brand' mentioned on Twitter\nlast month vs. this month?\"\n\n\"Who are the top 100 accounts by follower count that discussed\n'AI regulation' this week?\"\n\n\nDecision Framework\n\nChoose Official API When:\n\nYou need write access (posting tweets, managing accounts)\nYou require Ads API integration\nYour industry has compliance requirements mandating official data sources\nYou need real-time streaming at scale\nBudget accommodates $5,000+/month\n\nChoose Alternatives When:\n\nYour needs fall between Basic and Pro volumes\nYou need historical data without Pro pricing\nYou want multi-platform coverage\nYou prefer natural language over API coding\nBudget is under $500/month\nYou only need read access (no posting)\n\nHybrid Approach\n\nMany organizations use both:\n\nOfficial API Basic ($100/month) for posting and account management\nAlternative for large-scale data retrieval and analysis\n\nThis captures write functionality while keeping read costs manageable.\n\nKey Takeaways\n\nTwitter API pricing has a significant gap between Basic ($100/month, 10K tweets) and Pro ($5,000/month, 1M tweets).\n\nBasic tier's 7-day search limit makes it unsuitable for historical analysis, competitive research, or trend tracking.\n\nAlternatives fill the pricing gap with options between $0-500/month for volumes that would otherwise require Pro.\n\nMulti-platform solutions like Xpoz provide Twitter, Instagram, TikTok, and Reddit access under one subscription.\n\nAI-native access eliminates API integration complexity for teams without dedicated developers.\n\nConsider total cost of ownership including development time, not just subscription fees.\n\nHybrid approaches work using official API for write access and alternatives for cost-effective read access.\n\nFrequently Asked Questions\n\nHow much does the Twitter API cost in 2026?\n\nTwitter API pricing has four tiers: Free ($0 with severe limits), Basic ($100/month for 10K tweets), Pro ($5,000/month for 1M tweets), and Enterprise ($42,000+/month for custom limits). The biggest gap is between Basic and Pro—a 50x price increase with no intermediate option.\n\nIs there a free Twitter API?\n\nYes, Twitter offers a free API tier, but it's extremely limited. You get approximately 1 request per 15 minutes for reading tweets, 1,500 tweets/month posting limit, and no search functionality. It's only suitable for personal bots or basic testing.\n\nWhy did Twitter API become so expensive?\n\nAfter Elon Musk's acquisition in late 2022, Twitter restructured API pricing to reduce free data access and increase revenue. The Basic tier went from free to $100/month, and the Pro tier jumped to $5,000/month—eliminating the previously affordable $99-299/month options.\n\nWhat are cheaper alternatives to Twitter's Pro API?\n\nThird-party data providers like Xpoz offer Twitter data access at a fraction of official API costs. For example, Xpoz provides 1M results/month for $20 (vs $5,000 for official Pro), with multi-platform coverage (Twitter, Instagram, TikTok, Reddit) included.\n\nConclusion\n\nTwitter's API pricing structure serves certain use cases well—particularly large enterprises needing write access and compliance guarantees. But for the vast majority of data retrieval needs, the gap between Basic and Pro creates unnecessary cost barriers.\n\nUnderstanding your actual requirements—tweet volume, historical needs, platform coverage—reveals whether the official API or an alternative approach makes more financial sense. For many users, solutions like Xpoz provide the data access they need at a fraction of official API costs, without sacrificing the functionality that matters for their use case.\n\nStart by estimating your monthly tweet volume, checking if you need historical access, and determining whether you require write functionality. That analysis typically makes the right choice clear.",
    "link": "https://www.xpoz.ai/blog/guides/understanding-twitter-api-pricing-tiers-and-alternatives/",
    "snippet": "Twitter API pricing has four tiers: Free ($0 with severe limits), Basic ($100/month for 10K tweets), Pro ($5,000/month for 1M tweets), and ...",
    "title": "Twitter API Pricing 2026: Tiers, Costs \u0026 Alternatives - Xpoz"
  },
  {
    "content_readable": "This is part one of the Advanced Use Cases series:\n\n1️⃣ Extract Metadata from Queries to Improve Retrieval\n\n2️⃣ Query Expansion\n\n3️⃣ Query Decomposition\n\n4️⃣ Automated Metadata Enrichment\n\nSometimes a single question is multiple questions in disguise. For example: “Did Microsoft or Google make more money last year?”. To get to the correct answer for this seemingly simple question, we actually have to break it down: “How much money did Google make last year?” and “How much money did Microsoft make last year?”. Only if we know the answer to these 2 questions can we reason about the final answer.\n\nThis is where query decomposition comes in. This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach:\n\nDecompose the original question into smaller questions that can be answered independently to each other. Let’s call these ‘sub questions’ here on out.\nReason about the final answer to the original question, based on each sub-answer.\n\nWhile for many query/dataset combinations, this may not be required, for some, it very well may be. At the end of the day, often one query results in one retrieval step. If within that one single retrieval step we are unable to have the retriever return both the money Microsoft made last year and Google, then the system will struggle to produce an accurate final response.\n\nThis method ensures that we are:\n\nretrieving the relevant context for each sub question.\nreasoning about the final answer given each answer based on the contexts retrieved for each sub question.\n\nIn this article, I’ll be going through some key steps that allow you to achieve this. You can find the full working example and code in the linked recipe from our cookbook. Here, I’ll only show the most relevant parts of the code.\n\n🚀 I’m sneaking something extra into this article. I saw the opportunity to try out the structured output functionality (currently in beta) by OpenAI to create this example. For this step, I extended the OpenAIGenerator in Haystack to be able to work with Pydantic schemas. More on this in the next step.\n\nLet’s try build a full pipeline that makes use of query decomposition and reasoning. We’ll use a dataset about Game of Thrones (a classic for Haystack) which you can find preprocessed and chunked on Tuana/game-of-thrones on Hugging Face Datasets.\n\nDefining our Questions Structure\n\nOur first step is to create a structure within which we can contain the subquestions, and each of their answers. This will be used by our OpenAIGenerator to produce a structured output.\n\nfrom pydantic import BaseModel\n\nclass Question(BaseModel):\n    question: str\n    answer: Optional[str] = None\n\nclass Questions(BaseModel):\n    questions: list[Question]\n\n\nThe structure is simple, we have Questions made up of a list of Question. Each Question has the question string as well as an optional answer to that question.\n\nDefining the Prompt for Query Decomposition\n\nNext up, we need to get an LLM to decompose a question and produce multiple questions. Here, we will start making use of our Questions schema.\n\nsplitter_prompt = \"\"\"\nYou are a helpful assistant that prepares queries that will be sent to a search component.\nSometimes, these queries are very complex.\nYour job is to simplify complex queries into multiple queries that can be answered\nin isolation to eachother.\n\nIf the query is simple, then keep it as it is.\nExamples\n1. Query: Did Microsoft or Google make more money last year?\n   Decomposed Questions: [Question(question='How much profit did Microsoft make last year?', answer=None), Question(question='How much profit did Google make last year?', answer=None)]\n2. Query: What is the capital of France?\n   Decomposed Questions: [Question(question='What is the capital of France?', answer=None)]\n3. Query: {{question}}\n   Decomposed Questions:\n\"\"\"\n\nbuilder = PromptBuilder(splitter_prompt)\nllm = OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions})\n\n\nAnswering Each Sub Question\n\nFirst, let’s build a pipeline that uses the splitter_prompt to decompose our question:\n\nquery_decomposition_pipeline = Pipeline()\n\nquery_decomposition_pipeline.add_component(\"prompt\", PromptBuilder(splitter_prompt))\nquery_decomposition_pipeline.add_component(\"llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\n\nquery_decomposition_pipeline.connect(\"prompt\", \"llm\")\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question}})\nprint(result[\"llm\"][\"structured_reply\"])\n\n\nThis produces the following Questions (List[Question])\n\nquestions=[Question(question='How many siblings does Jamie have?', answer=None), \n           Question(question='How many siblings does Sansa have?', answer=None)]\n\n\nNow, we have to fill in the answer fields. For this step, we need to have a separate prompt and two custom components:\n\nThe CohereMultiTextEmbedder which can take multiple questions rather than a single one like the CohereTextEmbedder.\nThe MultiQueryInMemoryEmbeddingRetriever which can again, take multiple questions and their embeddings, returning question_context_pairs. Each pair contains the question and documents that are relevant to that question.\n\nNext, we need to construct a prompt that can instruct a model to answer each subquestion:\n\nmulti_query_template = \"\"\"\nYou are a helpful assistant that can answer complex queries.\nHere is the original question you were asked: {{question}}\n\nAnd you have split the task into the following questions:\n{% for pair in question_context_pairs %}\n  {{pair.question}}\n{% endfor %}\n\nHere are the question and context pairs for each question.\nFor each question, generate the question answer pair as a structured output\n{% for pair in question_context_pairs %}\n  Question: {{pair.question}}\n  Context: {{pair.documents}}\n{% endfor %}\nAnswers:\n\"\"\"\n\nmulti_query_prompt = PromptBuilder(multi_query_template)\n\n\nLet’s build a pipeline that can answer each individual sub question. We will call this the query_decomposition_pipeline :\n\nquery_decomposition_pipeline = Pipeline()\n\nquery_decomposition_pipeline.add_component(\"prompt\", PromptBuilder(splitter_prompt))\nquery_decomposition_pipeline.add_component(\"llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\nquery_decomposition_pipeline.add_component(\"embedder\", CohereMultiTextEmbedder(model=\"embed-multilingual-v3.0\"))\nquery_decomposition_pipeline.add_component(\"multi_query_retriever\", MultiQueryInMemoryEmbeddingRetriever(InMemoryEmbeddingRetriever(document_store=document_store)))\nquery_decomposition_pipeline.add_component(\"multi_query_prompt\", PromptBuilder(multi_query_template))\nquery_decomposition_pipeline.add_component(\"query_resolver_llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\n\nquery_decomposition_pipeline.connect(\"prompt\", \"llm\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"embedder.questions\")\nquery_decomposition_pipeline.connect(\"embedder.embeddings\", \"multi_query_retriever.query_embeddings\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"multi_query_retriever.queries\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"embedder.questions\")\nquery_decomposition_pipeline.connect(\"multi_query_retriever.question_context_pairs\", \"multi_query_prompt.question_context_pairs\")\nquery_decomposition_pipeline.connect(\"multi_query_prompt\", \"query_resolver_llm\")\n\n\nRunning this pipeline with the original question “Who has more siblings, Jamie or Sansa?”, results in the following structured output:\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question},\n                                           \"multi_query_prompt\": {\"question\": question}})\n\nprint(result[\"query_resolver_llm\"][\"structured_reply\"])\n\n\nquestions=[Question(question='How many siblings does Jamie have?', answer='2 (Cersei Lannister, Tyrion Lannister)'),\n           Question(question='How many siblings does Sansa have?', answer='5 (Robb Stark, Arya Stark, Bran Stark, Rickon Stark, Jon Snow)')]\n\n\nReasoning About the Final Answer\n\nThe final step we have to take is to reason about the ultimate answer to the original question. Again, we create a prompt that will instruct an LLM to do this. Given we have the questions output that contains each sub question and answer, we will make these inputs to this final prompt.\n\nreasoning_template = \"\"\"\nYou are a helpful assistant that can answer complex queries.\nHere is the original question you were asked: {{question}}\n\nYou have split this question up into simpler questions that can be answered in\nisolation.\nHere are the questions and answers that you've generated\n{% for pair in question_answer_pair %}\n  {{pair}}\n{% endfor %}\n\nReason about the final answer to the original query based on these questions and\naswers\nFinal Answer:\n\"\"\"\n\nresoning_prompt = PromptBuilder(reasoning_template)\n\n\nTo be able to augment this prompt with the question answer pairs, we will have to extend our previous pipeline and connect the structured_reply from the previous LLM, to the question_answer_pair input of this prompt.\n\nquery_decomposition_pipeline.add_component(\"reasoning_prompt\", PromptBuilder(reasoning_template))\nquery_decomposition_pipeline.add_component(\"reasoning_llm\", OpenAIGenerator(model=\"gpt-4o-mini\"))\n\nquery_decomposition_pipeline.connect(\"query_resolver_llm.structured_reply\", \"reasoning_prompt.question_answer_pair\")\nquery_decomposition_pipeline.connect(\"reasoning_prompt\", \"reasoning_llm\")\n\n\nNow, let’s run this final pipeline and see what results we get:\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question},\n                                           \"multi_query_prompt\": {\"question\": question},\n                                           \"reasoning_prompt\": {\"question\": question}},\n                                           include_outputs_from=[\"query_resolver_llm\"])\n\nprint(\"The original query was split and resolved:\\n\")\n\nfor pair in result[\"query_resolver_llm\"][\"structured_reply\"].questions:\n  print(pair)\nprint(\"\\nSo the original query is answered as follows:\\n\")\nprint(result[\"reasoning_llm\"][\"replies\"][0])\n\n\n🥁 Drum roll please:\n\nThe original query was split and resolved:\n\nquestion='How many siblings does Jaime have?' answer='Jaime has one sister (Cersei) and one younger brother (Tyrion), making a total of 2 siblings.'\nquestion='How many siblings does Sansa have?' answer='Sansa has five siblings: one older brother (Robb), one younger sister (Arya), and two younger brothers (Bran and Rickon), as well as one older illegitimate half-brother (Jon Snow).'\n\nSo the original query is answered as follows:\n\nTo determine who has more siblings between Jaime and Sansa, we need to compare the number of siblings each has based on the provided answers.\n\nFrom the answers:\n- Jaime has 2 siblings (Cersei and Tyrion).\n- Sansa has 5 siblings (Robb, Arya, Bran, Rickon, and Jon Snow).\n\nSince Sansa has 5 siblings and Jaime has 2 siblings, we can conclude that Sansa has more siblings than Jaime.\n\nFinal Answer: Sansa has more siblings than Jaime.\n\n\nWrapping up\n\nGiven the right instructions, LLMs are good at breaking down tasks. Query decomposition is a great way we can make sure we do that for questions that are multiple questions in disguise.\n\nIn this article, you learned how to implement this technique with a twist 🙂 Let us know what you think about using structured outputs for these sorts of use cases. And check out the Haystack experimental repo to see what new features we’re working on.",
    "link": "https://haystack.deepset.ai/blog/query-decomposition",
    "snippet": "This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach.",
    "title": "Advanced RAG: Query Decomposition \u0026 Reasoning - Haystack"
  },
  {
    "content_readable": "Updated\n\n4 days ago\n\nWith millions of conversations happening all over the web each day, it can be a long and tedious task trying to get more relevant mentions and tighten the scope of your query, but with the help of Advanced Topic Query, it can be at your fingertips.\n\nIn Social Listening, you have the option to create an advanced query that is not limited to ANY, ALL, or NONE formatting of query building. Advanced query builder can be used to form complex text queries which are not possible with a normal query builder.\n\nWhat is an Advanced Topic Query?\n\nAdvanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more.\n\nBy using advanced query you can pinpoint relevant information which is not possible with basic topic query.\n\nIt gives you the power to find the needle in a haystack.\n\n​\n\nBasic Topic Query v/s Advanced Topic Query\n\nWith more operators to use you can fetch conversations by language, geography, social media channel, volume, author, #listening, @account monitoring, user segment, and much more, it can give you access to more actionable insights.\n\nIn Basic Query, you can only use boolean operators like OR/ NOT/ AND/ along with NEAR. On the other hand, in Advanced Topic Query, it gives you access to use OR with/ inside AND, NOT (nested and within operator use cases), advanced operators, exact match operators etc.\n\nLet's see the use cases where advanced query will help in getting more insightful mentions –\n\nUse case #1: To search \"pepsi\" OR \"drink\" along with \"cups\".\n\nBasic Query\n\nAdvancd Query\n\nUse case #2: To get mentions of \"pepsi\" along with \"coke\" or \"sprite\" but not \"miranda\" with people having \"follower count\" between 100 to 1000 on \"twitter\".\n\nBasic Query\n\nAdvanced Query\n\nNot feasible in the basic Topic query\n\nThis is where we need the advanced Topic query.​\n\nHow to create an advanced Topic query?\n\nClick the New Tab icon. Under Sprinklr Insights, Click Topics within Listening.\n\nOn the Topics window, click Add Topic in the top right corner. Fill in the required fields and click Create.\n\nIn the Setup Query tab of Create New Topic window, select Advanced Query in the query section.\n\n​\n\nType your query in the Advanced Query field with the required operators and syntax.\n\nClick Save.\n\nTip: While using Instagram as a Listening Source, be sure that your query keywords include hashtags.\n\nWhich operators to use for building Topic queries?\n\nOperators for Topic queries\n\nIn creation of advanced queries along with boolean operators OR/ AND/ NOT/ etc, Sprinklr also supports operator types –\n\nSearch Operators\n\nExact Match Operators\n\nOperators for Getting Post Replies/Comments​\n\nSprinklr provides its user edge by giving them power to use Keywords List inside advanced query along with Operators mentioned.\n\nCreate query using Topic query operators\n\nFollowing are some most used operator examples and their results –\n\nOperator\n\nExample\n\nResult\n\nhello\n\nSearch for the term \"hello\"\n\nsocial sprinklr\n\nSearch for the phrases \"social\" and \"sprinklr\"\n\n​\n\nNote: Using this will show preview but topic can not be saved as it will show error, Use \"Social Sprinklr\" or (Social AND/OR/ NOT/ NEAR Sprinklr) to eliminate error.\n\nAND\n\nsocial AND sprinklr\n\nSearch for \"social\" and \"sprinklr\" anywhere within the complete message, irrespective of keywords between them\n\nOR\n\nsocial OR sprinklr\n\nSearch for \"social\" or \"sprinklr\"\n\nNOT\n\n\"social media\" NOT \"facebook\"\n\nSearch for results that contain \"social media\" but not \"facebook\"\n\n~\n\n\"social media\"~10\n\nSearch for \"social\" and \"media\" within 10 words of each other\n\nNEAR\n\nsocial NEAR/10 media\n\nSearch for \"social\" and \"media\" within 10 words of each other\n\nNote: This operator can be used with keyword lists.\n\nONEAR\n\nsocial ONEAR/10 media\n\nSearch for \"social\" and \"media\" within 10 words of each other in an ordered way\n\nNote: This operator searches social ahead of media.\n\ntitle\n\ntitle: (\"social media\")\n\nSearch for social media in the title of the message\n\nNote: It is mostly used for News, blogs, reviews and other sites.\n\nauthor\n\nauthor: \"social_media\"\n\nFetches all the mentions from author name: social_media\n\nSome other operators which are supported by Sprinklr are –\n\nProximity: It is used to define proximity or distance between 2 keywords only, whereas, NEAR can be used to define proximity between two keywords as well as keyword lists.\n\nOnear (Ordered Near): It sets the order in which the keywords will appear. For example, Keyword-List1 ONEAR/10 Keyword-List2 will ensure keywords from Keyword-List1 appear first and then Keyword-List2 keywords will follow within space of maximum 10.\n\nStep by step guide to make advanced Topic query\n\nUse case\n\nTo write query fetching mentions of ZARA –\n\n​\n\n(# listening is used for instagram listening)\n\nGetting mention along with clothing or fashion related terms only –\n\nRemoving profanity from mention (use case specific) –\n\nRemoving profanity from mention (use case specific) –\n\nAs social media has lots of profane words you can also remove it by making a keyword list and negating it from query –\n\nFiltering Mentions in English –\n\n​\n\nApplying source input as Twitter –\n\nGetting mentions of those users which have followers between 100 to 1000 –\n\n​\n\nAdvanced example showcasing use of Topic query operators and keyword list –\n\nBest practices while using Advanced Query\n\nUse of Parentheses\n\n​Parentheses are not necessary to enclose a search query but can be useful while grouping operations together for more complex queries.\n\n​\n\nFor example, if you want to return results that mention Samsung or Apple phones, and also want to query content that mentions phones along with either Apple or Samsung, you could use parentheses around Apple and Samsung to group three keywords together, as shown below –\n\nphone AND (Apple OR Samsung)\n\n​\n\nUse of parentheses within brackets, is further explained below with an example –\n\n[(internet of things ~3) OR iot OR internetofthings) AND (robots OR robot OR #robot)] NOT [things]\n\nTip: You can also use parentheses within brackets to set off additional operations within the Advanced Query field. The end result should look similar to the result summary of a basic query, built using multiple operations within a single section.\n\n\nAs a part of the rest of the query, this will perform the following operations –\n\nSearch for posts that contain the phrase \"internet of things\" or \"#internetofthings\"\n\nFrom within those results, keep any result that also says \"robots\" or \"robot\" or \"#robot\" within three words (a proximity search) of either \"internet of things\" or \"iot\" or \"internetofthings\".\n\nDiscard any results that just have the phrase \"things\" within.\n\nParentheses nested within brackets intend to set off different operations as isolated processes. In the previous example, if you build an Advanced Query that states [(internet of things OR iot OR internet of things) AND (robots OR robot OR #robot)] your query will return results that contain ANY of the first three terms and the second three terms.\n\nHowever, if you build an Advanced Query that states [internet of things OR iot OR internet of things AND robots OR robot OR #robot], your query will return any result that contains the phrase \"internet of things\" or the word \"iot\" or the word \"robot\" or the hashtag #robot or specifically the phrase \"internet of things\" within the same message as the word \"robots\".\n\nNote:\n\nYou cannot use a \"NOT\" statement with an \"OR\" statement.\n\n\nExample:\n( social OR NOT media ) ❌\n( social NOT media ) ✅\n\n(( social OR ( media NOT facebook )) ✅\n\nWhy?\n\nQuery should not contain \"NOT\" terms in \"OR\" with other terms, \"NOT\" clauses should be used in \"AND\" with other terms, using \"NOT\" in \"OR\" will bring too much data.\n\nUse of Quotation marks\n\nQuotation marks can be used for phrases in which you are looking for an exact match of those particular words in a specific order. Using parentheses or quotation marks for single-word queries is not mandatory.\n\nUse straight quotation marks ( \" \" ) for outlining phrases within it. The use of curved quotation marks (“ ”) will not produce your desired results.\n\nParentheses are generally used to group keywords or phrases joined by one or more operators together, but with other keywords involved, parentheses and quotations would act differently. For example –\n\nVersion 1: \"Phil Schiller\" AND \"Apple Marketing\" will return results for content with the exact phrase Phil Schiller (or phil schiller) and the exact phrase Apple Marketing (or apple marketing).\n\nNote: Here exact does not mean case sensitive as in the case of exactMessage Operator.\n\nExample: exactMessage: (\"Phil Schiller\" AND \"Apple Marketing\"), which will fetch results for phrase Phil Schiller (not phil schiller) and the exact phrase Apple Marketing (not apple marketing).\n\n\nVersion 2: \"Phil Schiller\" AND (Apple OR Marketing) will return results for content with the phrase \"Phil Schiller\" (together) and at least one of the words, Apple or Marketing.\n\nHandling for Broad \u0026 Ambiguous Keywords\n\nIt is very important to not use/reduce use of broad keywords in advanced queries. Broader keywords will fetch mentions that are unrelated to topic of interest, and eventually hinder dashboard/insights\n\nFor all keywords used in an advanced topic query, ensure they are directly related to the topic of interest.\n\nIn case keywords are broad but relevant to topic, they should be tied to some relevant keywords related to that topic, by using NEAR Operators\n\nExample: Robot is an important keyword for Robot Company. However just using this keyword will fetch irrelevant keywords as it’s a broad keyword used for other entities as well (Robot Street, etc).\n\nInstead of using just Robot keyword, we should use: Robot NEAR/4 (Technology OR “machine” OR # tech OR IOT OR “Internet of things” ….)\n\nNote how keywords related to Robot are used with NEAR Operator. Related keywords could be related entities, industry keywords, parent company, country keywords, etc.\n\nFrequently asked questions\n\n​\n\nIs it compulsory to put quotation marks around phrases like \"apple music\" or can we use apple music directly?\n\nHow can I eliminate posts with many spam #’s or @’s?\n\nCan exact match or parent operators be used in advanced query?\n\nWhy am I able to see mentions in preview during making of topic but not in dashboard?\n\nDuring listening to @ mentions a lot of spam mentions are also getting tagged along, e.g. like wanting to get mentions of @tom but messages of @tom_fan56 are also coming. How to remove these irrelevant mentions?\n\nIf I write query as “tom” will it also fetch mentions such as tom_jerry / @tom / #tom ?\n\n​",
    "link": "https://www.sprinklr.com/help/articles/faqs-and-advanced-usecases/create-an-advanced-topic-query/646331628ea3c9635cf36711",
    "snippet": "Advanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more. By ...",
    "title": "‎Create an Advanced Topic Query | Sprinklr Help Center"
  },
  {
    "content_readable": "July 8, 2021, 7:10pm 1\n\nHello! I’m struggling to figure out how to write a query to get all the todos on a particular page. I thought there might be a generic way to do this for any page, but I’m even struggling to get it to work for a specific page using the page’s name. I think part of it is that I’m just learning the query syntax. Here’s what I’ve tried and learned:\n\n{{query todo}} gives all the todos in my entire database\nThen I tried to use a filter by page: {{query todo [[the book]] }} (“the book” is an alias to the page I’m trying to collect todos from). This is giving the same results as #1, which surprised me.\nI thought, maybe I need to explicitly use logical expressions to tie the parts of the query together. So I tried {{query (and todo [[the book]]) }}, which was empty.\n\nI’d appreciate any help filling the gaps in my knowledge and creating a query that collects todos on a given page.\n\n5 Likes\n\nc6p July 23, 2021, 2:02pm 2\n\nQuery works for me when To-Do states explicitly specified {{ query (and (todo todo doing) [[the book]]) }}.\n\n1 Like\n\njafish July 23, 2021, 3:21pm 3\n\nThank you @c6p, that worked! Now, any ideas on making this a more general query that will work on any page? I see that for templates there is a “current page” variable, but that doesn’t seem to apply to queries. I’ve also noticed some references to current page in the advanced query syntax, so maybe that’s where I need to be looking…\n\n1 Like\n\nc6p July 23, 2021, 5:56pm 4\n\nCheck out https://logseq.github.io/#/page/dynamic%20variables\n\n{{query (and (todo todo doing) (page \u003c% current page %\u003e))}} grabs To-dos for current page.\n\n10 Likes\n\njafish July 24, 2021, 1:24am 5\n\nThanks, I missed the “syntax” line on that documentation page when I was looking at it. I appreciate you taking the time to reply and share!\n\nLuhmann July 24, 2021, 2:05am 6\n\nSee my feature request for a “page specific NOW section”.\n\n1 Like\n\nsabre23t July 25, 2021, 12:32am 7\n\nShouldn’t this later reply by c6p be marked the solution @jafish?\n\n2 Likes\n\njafish July 25, 2021, 1:29am 8\n\nYes, thanks! I didn’t realize you could only mark one solution. I had marked them both, in reverse order, and, well, you saw what happened\n\n1 Like\n\nHulk September 6, 2021, 5:21am 9\n\nThis solution does not work for me. I have even copied and pasted your syntax directly in. Weird…\n\nc6p September 17, 2021, 7:40pm 10\n\nIt works for me. Could you make it work? Or what is your output? If you still have problems, you can get quicker responses on discord.\n\nHulk September 21, 2021, 7:18pm 11\n\nOkay. It’s working! I think it must have been another part of my query which went wrong.\n\n1 Like\n\nThanks! This works fine on my page (and is another game changer vs. Roam, where I just come from). However, when opening the same page in the sidebar the query gives me the TODOs from the page opened in my main view. Is this intended behavior or a bug?\n\nxavon January 16, 2024, 8:14pm 13\n\nis possible to target alias with dynamic variable ?\nbecause using \u003c% current page %\u003e , it only get the title of the page\n\nmentaloid January 16, 2024, 8:27pm 14\n\nThis should be possible with Advanced Queries, using :current-page and :block/alias .\n\nxavon January 16, 2024, 8:34pm 15\n\nmay i ask, if you could help me on that query ?\n\ni have a simple query that is inside a template:\n{{query (page-property -zoom \u003c% current page %\u003e )}}\n\ntried this but didnt work:\n{{query (page-property -zoom or(:current-page :block/alias) )}}\n\nmentaloid January 16, 2024, 8:44pm 16\n\nAs said, this needs Advanced Queries. You need to:\n\nfollow the link\ndo some reading\nexperiment and familiarize with advanced queries\ncome back when you have an advanced query to work with\n\nxavon January 16, 2024, 9:39pm 17\n\nshure i tried, but didnt get results:\n\n#+BEGIN_QUERY\n{\n:query [:find (pull ?p [*])\n:where\n(or\n(property ?p :-zoom :current-page)\n(property ?p :-zoom :block/alias)\n)]}\n#+END_QUERY\n\nso in the current page that goes that query , i want to find all the other pages that have the -zoom property that contains the title or alias of the page that goes the query\n\nmentaloid January 16, 2024, 10:56pm 18\n\nHere are two options:\n\n#+BEGIN_QUERY\n{\n :query [:find (pull ?p [*])\n   :in $ ?current-name\n   :where\n     (or-join [?p ?current-name]\n       (property ?p :-zoom ?current-name)\n       (and\n         [?current :block/name ?current-name]\n         [?current :block/alias ?alias]\n         [?alias :block/name ?alias-name]\n         (property ?p :-zoom ?alias-name)\n       )\n     )\n ]\n :inputs [:current-page]\n}\n#+END_QUERY\n\n\n#+BEGIN_QUERY\n{\n :query [:find (pull ?p [*])\n   :in $ ?current-name\n   :where\n     [?current :block/name ?current-name]\n     (or-join [?current ?name]\n       [?current :block/name ?name]\n       (and\n         [?current :block/alias ?alias]\n         [?alias :block/name ?name]\n       )\n     )\n     (property ?p :-zoom ?name)\n ]\n :inputs [:current-page]\n}\n#+END_QUERY\n\n\nxavon January 17, 2024, 12:00am 19\n\nyou are good on this bro.\nThe second one works, is querying correct , i just have one doubt on the way the information is returning\nI see that you use in the query,\n\n[:find (pull ?p [*])\n\n\nbut it is returning me the block instead of the page name, why ? i though the ?p would return me only the page name (that is what im seeking)\n\nmentaloid January 17, 2024, 8:02am 20\n\n?p is defined as a block (or actually something that has properties).\n\nIt participates in the query as a mere number from the database.\nTo get its name, add [?p :block/name ?p-name]\n\nThat will also limit the results to pages, as only page-blocks have a name.\nThen to list that name, replace (pull ?p [*]) with ?p-name",
    "link": "https://discuss.logseq.com/t/query-todos-on-current-page/1481",
    "snippet": "I'm struggling to figure out how to write a query to get all the todos on a particular page. I thought there might be a generic way to do this for any page.",
    "title": "Query Todos on Current Page - Questions \u0026 Help - Logseq"
  },
  {
    "content_readable": "Introduced in 2020, the GitHub user profile README allow individuals to give a long-form introduction. This multi-part tutorial explains how I setup my own profile to create dynamic content to aid discovery of my projects:\n\nwith the Liquid template engine and Shields (Part 1 of 4)\nusing GitHub's GraphQL API to query dynamic data about all my repos (keep reading below)\nfetching RSS and Social cards from third-party sites (Part 3 of 4)\nautomating updates with GitHub Actions (Part 4 of 4)\n\nYou can visit github.com/j12y to see the final result of what I came up with for my own profile page.\n\nThe GitHub Repo Gallery\n\nThe intended behavior for my repo gallery is to create something similar to pinned repositories but with a bit more visual pizzazz to identify what the projects are about.\n\nIn addition to source code, the repo can have metadata associated with it:\n\n✔️ Name of the repository\n✔️ Short description of the project\n✔️ Programming language used for the project\n✔️ List of tags / topics\n✔️ Image that can be used for social cards\n\nAbout\n\nThe About has editable fields to set the description and topics.\n\nSettings\n\nThe Settings includes a place to upload an image for social media preview cards.\n\nIf you don't set a preview card image, GitHub will generate one automatically that includes some basic profile statistics and your user profile image.\n\nGetting Started with the GitHub REST API\n\nThe way I structured this project is to build a library of any functions related to querying GitHub in src/gh.ts. I used a .env file to store my personal access (classic) token for authentication during local development.\n\n├── package.json\n├── .env\n├── src\n│   ├── app.ts\n│   ├── gh.ts\n│   └── template\n│       ├── README.liquid\n│       ├── contact.liquid\n│       └── gallery.liquid\n└── tsconfig.json\n\n\nI started by using REST endpoints with the Octokit library and TypeScript bindings.\n\n// src/gh.ts\nimport { Octokit } from 'octokit';\nimport { RestEndpointMethodTypes } from '@octokit/plugin-rest-endpoint-methods'\nconst octokit = new Octokit({ auth: process.env.TOKEN});\n\nexport class GitHub {\n    // GET /users/{user}\n    // https://docs.github.com/en/rest/users/users#get-a-user\n    async getUserDetails(user: string): Promise\u003cRestEndpointMethodTypes['users']['getByUsername']['response']['data']\u003e {\n        const { data } = await octokit.rest.users.getByUsername({\n            username: user\n        });\n\n        return data;\n    };\n}\n\n\nFrom src/app.ts I initialize the GithHub class, fetch the results, and can inspect the data being returned as a way to get comfortable with the various endpoints.\n\n// src/app.ts\nimport dotenv from 'dotenv';\nimport { GitHub } from \"./gh\";\n\nexport async function main() {\n  dotenv.config();\n  const gh = new GitHub()\n\n  const details = await gh.getUserDetails();\n  console.log(details);\n}\nmain();\n\n\nI typically get started on projects with simple tests like this to make sure all the various pieces to an integration can be configured and work together before getting too far.\n\nUse the GitHub GraphQL Endpoint\n\nTo get the data needed for the gallery layout, it would be necessary to make multiple calls to REST endpoints. In addition there is some data not yet available from the REST endpoint at all.\n\nSwitching to query using the GitHub GraphQL interface becomes helpful. This single endpoint can process a number of queries and give precise control over the data needed.\n\n💡 The GitHub GraphQL Explorer was fundamentally useful for me to get the right queries defined\n\nThis query needs authorization with the personal access token to fetch profile details about followers similar to some of the details returned from the REST endpoints.\n\n// src/gh.ts\n\nconst { graphql } = require(\"@octokit/graphql\")\n\nexport class GitHub \n    // https://docs.github.com/en/graphql\n    graphqlWithAuth = graphql.defaults({\n        headers: {\n            authorization: `token ${process.env.TOKEN}`\n        }\n    })\n\n    async getProfileOverview(name: string): Promise\u003cany\u003e {\n        const query = `\n            query getProfileOverview($name: String!) { \n                user(login: $name) { \n                    followers(first: 100) {\n                        totalCount\n                        edges {\n                            node {\n                                login\n                                name\n                                twitterUsername\n                                email\n                            }\n                        }\n                    }\n                }\n            }\n        `;\n        const params = {'name': name};\n\n        return await this.graphqlWithAuth(query, params);\n    }\n}\n\n\nThere are other resources such as Learn GraphQL if you haven't written many queries yet which explains the basics around syntax, schemas, and types.\n\nGetting used to GitHub's GraphQL schema primarily involves walking a series of edges to find linked nodes for objects of interest and their data attributes. In this case, I started by querying a user profile, finding the list of linked followers, and then inspecting their corresponding node's login, name, and email address.\n\n   ┌────────────┐\n   │    user    │\n   └─────┬──────┘\n         │\n         └──followers\n               │\n               ├─── totalCount\n               │\n               └─── edges\n                     │\n                     └── node\n\n\n\nFaceted Search by Topic Frequency\n\nI often want to find repositories by a topic. The user interface makes it easy to filter among many repositories by programming language such as python but unless you know which topics are relevant can become hit or miss. Was it nlp or nltk I used to categorize related repositories. Did I use dolby or dolbyio to identify repos I have for work projects.\n\nA faceted search that narrows down the number of matching repositories can be helpful for finding relevant projects like this. Given topics on GitHub are open-ended and not constrained to fixed values, it can be easy to accidentally categorize repos with variations like lambda and aws-lambda such that searches only identify partial results.\n\nTo address this, a GraphQL query gathering topics by frequency of usage within an organization or individual account can help with identifying the most useful topics.\n\nThe steps for this would be:\n\nQuery repository topics\nProcess results to group topics by frequency\nUse a template to render the gallery\n\n1 - Query Repository Topics\n\nI used the following GraphQL query to fetch my repositories and their corresponding topics.\n\nconst query = `\n    query getReposOverview($name: String!) {\n        user(login: $name) {\n            repositories(first: 100 ownerAffiliations: OWNER) {\n                edges {\n                    node {\n                        name\n                        url\n                        description\n                        openGraphImageUrl\n                        repositoryTopics(first: 100) {\n                            edges {\n                                node {\n                                    topic {\n                                        name\n                                    }\n                                }\n                            }\n                        }\n                        primaryLanguage {\n                            name\n                        }\n                    }\n                }\n            }\n        }\n    }\n`;\n\n\nThis query starts by filtering by user owned repositories (not counting forks) along with the metadata such as the social image.\n\n2 - Process Results and Group Topics by Frequency\n\nIterating over the results of the query the convention used was to look for anything with the topic github-gallery as something to be featured in the gallery. We also get a count of usage for each of the other topics and programming languages.\n\nvar topics: {[id: string]: number } = {};\nvar languages: {[id: string]: number } = {};\nvar gallery: {[id: string]: any } = {};\n\nconst repos = await gh.getReposOverview(user);\nfor (let repo of repos.user.repositories.edges) {\n  // Count occurrences of each topic\n  repo.node.repositoryTopics.edges.forEach((topic: any) =\u003e {\n    if (topic.node.topic.name == 'github-gallery') {\n      gallery[repo.node.name] = repo;\n    } else {\n      topics[topic.node.topic.name] = topic.node.topic.name in topics ? topics[topic.node.topic.name] + 1 : 1;\n    }\n  });\n\n  // Count and include count of language used\n  if (repo.node.primaryLanguage) {\n    languages[repo.node.primaryLanguage.name] = repo.node.primaryLanguage.name in languages ? languages[repo.node.primaryLanguage.name] + 1 : 1;\n  }\n}\n\n\n3 - Use a template to render the gallery\n\nThe topics are ordered by how often they are used. From the previous post on setting up a dynamic profile, I'm passing scope to the liquid engine for any data to be made available in a template.\n\n  // Share topics sorted by frequency of use for filtering repositories\n  // from the organization\n  scope['topics'] = Object.entries(topics).sort(function (first, second) {\n    return second[1] - first[1];\n  });\n  scope['languages'] = Object.entries(languages).sort(function (first, second) {\n    return second[1] - first[1];\n  });\n\n  // Gather topics across repos\n  scope['gallery'] = Object.values(gallery);\n\n\n\nThe repository page on GitHub uses query parameters to sort and filter, so items like topic:nltk can be passed directly in the URL to load a filtered view of repositories. The shields create a nice looking button for navigating to the topic, and use of icons for programming languages helps find relevant code samples.\n\n\u003cp\u003eExplore some of my projects: \u003cbr/\u003e\n{% for language in languages %}\u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=language%3A{{language[0]}}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/badge/{{ language[0] }}-{{ language[1] }}-lightgrey?logo={{ language[0] }}\u0026label={{ language[0] }}\u0026labelColor=000000\" alt=\"{{ language[0] }}\"/\u003e\u003c/a\u003e {% endfor %}\n{% for topic in topics %}\u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=topic%3A{{topic[0]}}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/static/v1?label={{topic[0]}}\u0026message={{ topic[1] }}\u0026labelColor=blue\"/\u003e\u003c/a\u003e {% endfor %}\n\u003c/p\u003e\n\n\nThe presentation includes a 3-column row in a table for displaying the metadata about each featured gallery project. This could display all repositories, but limiting to one or two rows seems sensible for managing screen space.\n\n{% for tile in gallery limit:3 %}\n\u003ctd width=\"25%\" valign=\"top\" style=\"padding-top: 20px; padding-bottom: 20px; padding-left: 30px; padding-right: 30px;\"\u003e\n\u003ca href=\"{{ tile.node.url }}\"\u003e\u003cimg src=\"{{ tile.node.openGraphImageUrl }}\"/\u003e\u003c/a\u003e\n\u003cp\u003e\u003cb\u003e\u003ca href=\"{{ tile.node.url }}\"\u003e{{ tile.node.name }}\u003c/b\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e{{ tile.node.description }}\u003cbr/\u003e\n{% for topic in tile.node.repositoryTopics.edges %} \u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=topic%3A{{topic.node.topic.name }}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/badge/{{ topic.node.topic.name | replace: \"-\", \"--\" }}-blue?style=pill\"/\u003e\u003c/a\u003e {% endfor %}\n\u003c/p\u003e\n\u003c/td\u003e\n{% endfor %}\n\n\nWith all of that put together, we now have a gallery that displays a picture along with the name, description, and tags. The picture can highlight a user interface, architectural diagram, or some other branded visual to help identify the purpose of the project visually.\n\nWe can also use this to maintain our list of topics and make finding relevant topics for an audience easier to discover.\n\nLearn more\n\nI hope this overview helps with getting yourself sorted. The next article will dive into some of the other ways of aggregating content.\n\nFetching RSS and Social Cards for GitHub Profile (Part 3 of 4)\nAutomating GitHub Profile Updates with Actions (Part 4 of 4)\n\nDid this help you get your own profile started? Let me know and follow to get notified about updates.",
    "link": "https://dev.to/j12y/query-github-repo-topics-using-graphql-35ha",
    "snippet": "Creating a customized user profile page for GitHub to showcase work projects and make navigation to relevant topics easier.",
    "title": "Query GitHub Repo Topics Using GraphQL - DEV Community"
  },
  {
    "content_readable": "December 22, 2023, 10:47am 1\n\nI have variables as shown below. What is the explanation for the fact that I can access both pages with query parameters in Staging environment, but not in production for the one that I created recently? (I can see parameters on the page created 1 month ago) When I look at the related topics in the forum, it is recommended not to use url path. That’s why I am trying this way. Is it bug caused by WeWeb or am I missing something? Is there a period of 30 minutes, 1 hour or a few days after publishing? In order for the URLs to be updated in a way that can come with parameters. Or do you need to make settings from a different place to display the parameters in the urls, such as project settings?\n\nJoyce (Joyce) December 22, 2023, 4:02pm 2\n\nHi @mert.incesu\n\nCan you provide us with more information about how things are set up at page level, maybe record a short video?\n\nNot sure where you read this but there are pros and cons to both approaches (URL path and query) as detailed in this academy video about dynamic collection pages.\n\nNope, the change should be live as soon as the app finished publishing.\n\nI’m not sure how things are set up on your side. If the academy video above doesn’t help, could you record a video so I can better understand what you’re trying to do and hopefully point you in the right direction?\n\n1 Like\n\nbenFortunato (ben@mybuildplan.co) March 11, 2024, 3:09pm 3\n\nLooking through the video but I could find anything that talks about the difference between using url parameters in the format https://domain/page?param1=value1\u0026param2=value2 and using a unique page ID through a parameter.\n\nI’m trying to set up a filter that generates query parameters in the url so that way I can send a QR code that links to a specific search or users can share a specific view or search.\n\nSince parameter are internal to the app there is no need to pass parameters through a url string if the user is navigating through the site. Once the link is shared we would need to pull the parameters from the url string.\n\nFigured out how to add a “parameter” to the url through the page settings but that just adds text to create a unique page name.\n\nHow can I create a set of proper url parameters that I can use to set variables?\n\nliam_putra (William anputra) July 21, 2024, 4:10pm 4\n\nNo update on this from the weweb team ?\n\n1 Like\n\njptrinh (Jean-Philippe Trinh)\n\nJuly 24, 2024, 1:07pm 5\n\nHey @benFortunato @liam_putra\n\nSorry for the delay, hope this answer martches what you are trying to achieve!\n\nTo access page parameters, you have access to them in the Formula panel:\n\nHere is how to set Query parameters\n\nYou can set query variables from the Data/Variables panel.\nWhen you do that, from any link, you can set those variable by adding an item in the Queries section. Or set the value directly from the url.\nOn the page where you want to use the value of the query param, you can bind the variable as you would with any variable. Or use the query variable in Browser/query/yourParam.\n\n\n2 Likes\n\nbenFortunato (ben@mybuildplan.co) July 24, 2024, 2:02pm 6\n\nI would add to this a quick tip that triped me up. If you create a page that needs query parameters to load, it won’t show up in the weweb editor unless you manually put those parameter in the URL bar. For example if you have the parameter Myparameter1=value1 you will need to navigate to the weweb editor page and manually append those to the URL. For example: https://weweb-editor-url should be https://weweb-editor-url?myparameter1=value1\u0026myparameter2=value 2\n\nWhen you hit enter the page will reload and weweb will be able to load the parameter from the browser into the editor just like it will when you publish the site.\n\nHope that helps, that took me a while to figure out.\n\n2 Likes\n\nliam_putra (William anputra) July 26, 2024, 4:03am 7\n\nhmmm… how did you avoid that ?\n\nit might not take too much time to keep manually append those parameter on the URL, just wondering if theres more an automated solution ?\n\nliam_putra (William anputra) July 26, 2024, 4:03am 8\n\njptrinh (Jean-Philippe Trinh) July 26, 2024, 8:04am 9\n\nI modified my previous answer a bit. Got mixed up with Page parameters and Query parameters!\nIf you want multiple query parameters, you should use the second method\n\nAnd you can set a worfklow and change the value of your query variables to quickly set them!\n\nAntiokh (Anton) May 28, 2025, 3:48am 10\n\nThat’s fantastic! No answer to this question since Mar’2024.\n\nthe answer is: use ?wwParam-**param=test123**\n\n1 Like",
    "link": "https://community.weweb.io/t/query-parameters-in-url/6074",
    "snippet": "I'm trying to set up a filter that generates query parameters in the url so that way I can send a QR code that links to a specific search or users can share a ...",
    "title": "Query parameters in url - Ask us anything - WeWeb Community"
  }
]
s4 llm_format success 2026-03-01 22:37:31 → 2026-03-01 22:38:00
Input (73516 bytes)
[
  {
    "content_readable": "The X API uses pay-per-usage pricing. No subscriptions—pay only for what you use.\n\nHow it works\n\nCredit-based\n\nPurchase credits upfront in the Developer Console. Credits are deducted as you make API requests.\n\nPer-endpoint pricing\n\nDifferent endpoints have different costs. View current rates in the Developer Console.\n\nNo commitments\n\nNo contracts, subscriptions, or minimum spend. Start and stop anytime.\n\nReal-time tracking\n\nMonitor usage and costs live in the Developer Console.\n\nEarn free xAI API credits when you purchase X API credits—up to 20% back based on your spend. Learn more\n\nIf you are on a legacy subscription package (Basic or Pro), you can opt in to Pay-per-use pricing directly from the Developer Console. If you’d like to switch back to your legacy plan at any time, you can do so from the settings page within the Developer Console.\n\nDeduplication\n\nAll resources are deduplicated within a 24-hour UTC day window. If you request and are charged for a resource (such as a Post), requesting the same resource again within that window will not incur an additional charge. This means:\n\nRequesting the same Post multiple times in a day counts as one charge\nThe deduplication window resets at midnight UTC\nThis applies to all billable resources (Posts, users, etc.)\n\nDeduplication is a soft guarantee. While it occurs in the vast majority of cases, there may be specific edge cases like service outages that result in resources not being deduplicated.\n\nCredit balance\n\nYour credit balance is displayed in the Developer Console. Credits are deducted in real-time as you make API requests.\n\nMonitor your credit balance regularly to avoid service interruptions. Add credits before your balance reaches zero to ensure uninterrupted API access.Note: It is possible for an account credit balance to go slightly negative. In this case, API requests will be blocked until you add credits to cover the negative balance.\n\nAuto-recharge\n\nEnable auto-recharge to automatically top up your credit balance and avoid service interruptions. Configure this in the Developer Console:\n\nSetting\tDescription\nRecharge amount\tThe amount to add when auto-recharge triggers (e.g., $25)\nTrigger threshold\tAuto-recharge activates when your balance falls below this amount (e.g., $5)\n\nAuto-recharge requires a saved payment method set as your default. You can cancel anytime in the Developer Console or by contacting support.\n\nSpending limits\n\nSet a maximum amount you can spend per billing cycle to control costs. When the limit is reached, API requests will be blocked until the next billing cycle.\n\nOption\tDescription\nSpending limit\tSet a specific dollar amount as your maximum spend per billing cycle\n\nUse spending limits to prevent unexpected charges, especially during development and testing.\n\nFree xAI API Credits\n\nWhen you purchase X API credits, you can earn free xAI API credits based on your cumulative spend during a billing cycle.\n\nTo receive free xAI credits, you must link your xAI team to your X developer account. You can do this by visiting your account settings in the developer console.\n\nHow it works\n\nYour cumulative spend is tracked throughout each billing cycle. As you cross spending thresholds, you unlock higher reward rates. When a new billing cycle starts, your cumulative spend resets to $0.\n\nCumulative spend\tRate\n$0 – $199\t0%\n$200 – $499\t10%\n$500 – $999\t15%\n$1,000+\t20%\n\nThe rate applies to your entire cumulative balance, but you only receive the delta—what’s newly owed minus what was already credited.\n\nExample\n\nSuppose you make several purchases throughout a billing cycle:\n\nPurchase\tRate\tTotal owed\tAlready credited\tYou receive\n$100\t0%\t$0\t$0\t$0\n$100\t10%\t$20\t$0\t$20\n$150\t10%\t$35\t$20\t$15\n$150\t15%\t$75\t$35\t$40\n$250\t15%\t$112.50\t$75\t$37.50\n$250\t20%\t$200\t$112.50\t$87.50\n$1,000\t$200\n\nThis is the same amount you’d receive from a single $1,000 purchase—the order and size of purchases doesn’t affect your total rewards.\n\nMonitoring usage\n\nTrack your API usage programmatically with the Usage endpoint:\n\ncurl \"https://api.x.com/2/usage/tweets\" \\\n  -H \"Authorization: Bearer $BEARER_TOKEN\"\n\n\nThis returns daily Post consumption counts, helping you:\n\nTrack consumption against your budget\nSet up alerts when approaching limits\nIdentify high-consumption endpoints\nGenerate usage reports\n\nEnterprise pricing\n\nFor high-volume access with dedicated support, custom rate limits, and additional features, contact our enterprise sales team.\n\nPay-per-usage plans are subject to a monthly cap of 2 million Post reads. If you need higher volume, consider an Enterprise plan.\n\nNext steps",
    "link": "https://docs.x.com/x-api/getting-started/pricing",
    "snippet": "Credit-based. Purchase credits upfront in the Developer Console. Credits are deducted as you make API requests.",
    "title": "Pricing - X - X Developer Platform"
  },
  {
    "content_readable": "The X API gives you programmatic access to X’s public conversation. Read posts, publish content, manage users, and analyze trends—all through modern REST endpoints with flexible pay-per-usage pricing.\n\nWhat you can build\n\nPricing\n\nThe X API uses pay-per-usage pricing. No subscriptions—pay only for what you use.\n\nFlexible scaling\n\nStart small and grow. Costs scale with your actual usage.\n\nNo commitments\n\nNo contracts or minimum spend. Stop anytime.\n\nReal-time tracking\n\nMonitor usage and costs live in the Developer Console.\n\nCredit-based\n\nPurchase credits upfront. Deducted as you use the API.\n\nEarn free xAI API credits when you purchase X API credits—up to 20% back based on your spend. Learn more\n\nPay-per-usage plans are subject to a monthly cap of 2 million Post reads. If you need higher volume, consider an Enterprise plan.\n\nKey features\n\nData access\n\nNear real-time streaming\n\nSearch \u0026 analytics\n\nRich data objects\n\nAccess detailed, structured data for posts, users, media, and more:\n\nPosts: Full text, metrics, entities, annotations, conversation threads\nUsers: Profiles, follower counts, verification status\nMedia: Images, videos, GIFs with metadata\nPolls: Options and vote counts\n\nCustomize responses with fields and expansions to get exactly the data you need.\n\nFiltered stream\n\nGet posts delivered in near real-time as they’re published. Define up to 1,000 filtering rules to receive only matching posts.\n\n# Add a rule\ncurl -X POST \"https://api.x.com/2/tweets/search/stream/rules\" \\\n  -H \"Authorization: Bearer $TOKEN\" \\\n  -d '{\"add\": [{\"value\": \"from:xdevelopers\"}]}'\n\n# Connect to stream\ncurl \"https://api.x.com/2/tweets/search/stream\" \\\n  -H \"Authorization: Bearer $TOKEN\"\n\n\nLearn more about filtered stream →\n\nFull-archive search\n\nSearch the complete history of public posts—back to 2006. Build queries with operators for users, keywords, dates, and more.\n\ncurl \"https://api.x.com/2/tweets/search/all?query=AI%20lang:en\" \\\n  -H \"Authorization: Bearer $TOKEN\"\n\n\nMetrics\n\nAccess engagement metrics including impressions, likes, reposts, replies, and video views.Learn more about search →\n\nAPI versions\n\nVersion\tStatus\tDescription\nv2\tCurrent\tModern endpoints, flexible pricing, all new features\nEnterprise\tAvailable\tHigh-volume access with dedicated support\n\nUse X API v2 for all new projects. It’s where all new features and improvements are released.\n\nQuick start\n\n1\n\nCreate a developer account\n\n2\n\nGet your credentials\n\nGenerate your Bearer Token for app-only requests.\n\n3\n\nMake a request\n\nTry looking up a user:\n\ncurl \"https://api.x.com/2/users/by/username/xdevelopers\" \\\n  -H \"Authorization: Bearer $BEARER_TOKEN\"\n\n\nTools \u0026 libraries\n\nBrowse all libraries →\n\nSupport",
    "link": "https://docs.x.com/x-api/introduction",
    "snippet": "Pricing. The X API uses pay-per-usage pricing. No subscriptions—pay only for what you use. Flexible scaling. Start small and grow. Costs scale with your ...",
    "title": "Programmatic access to X's posts, users, spaces, and more - X API"
  },
  {
    "content_readable": "Crawler is not allowed!",
    "link": "https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476",
    "snippet": "Legacy Free API Users: If you've been recently active on our Legacy Free tier, you'll be transitioned to Pay-Per-Use with a one-time $10 voucher ...",
    "title": "Announcing the Launch of X API Pay-Per-Use Pricing"
  },
  {
    "content_readable": "whoa there, pardner!\n\nReddit's awesome and all, but you may have a bit of a problem. We've seen far too many requests come from your IP address recently.\n\nPlease wait a few minutes and try again.\n\nIf you're still getting this error after a few minutes and think that we've incorrectly blocked you or you would like to discuss easier ways to get the data you want, please contact us at this email address.\n\nYou can read Reddit's Terms of Service here.\n\nWhen contacting us, please include your Reddit account along with the following code:\n\n019cab55-93c1-71c4-976e-25cbe0c9b67d",
    "link": "https://www.reddit.com/r/SaaS/comments/1gh6qvj/twitters_pricing_is_ridiculous/",
    "snippet": "\"Additionally, effective July 1, 2025, we will discontinue our existing Enterprise API tiers and introduce a new streamlined v2 API tier ...",
    "title": "Twitter's Pricing is Ridiculous! : r/SaaS - Reddit"
  },
  {
    "content_readable": "",
    "link": "https://developer.x.com/",
    "snippet": "Fixed monthly fees ($200 or $5,000) · Tier-dependent rate limits · Monthly caps on objects · Costs jump with tiered plans ...",
    "title": "X Developer Platform - Twitter"
  },
  {
    "content_readable": "",
    "link": "https://medium.com/@asaan/twitter-api-changes-navigating-the-end-of-free-access-your-2024-guide-b9f9cf47ea79",
    "snippet": "New Twitter API Pricing Tiers · Basic Tier: $100/month for access to 10,000 tweets per month · Pro Tier: $5,000/month for access to 2 million ...",
    "title": "Twitter API Changes: What You Need to Know About the End of Free ..."
  },
  {
    "content_readable": "The X API provides programmatic access to X’s public conversation. Retrieve posts, analyze trends, build integrations, and create new experiences on the platform.\n\nWhat you can do\n\nCapability\tDescription\nRead posts\tSearch, look up, and stream posts in real-time\nPublish content\tCreate posts, replies, and threads\nManage users\tLook up users, manage follows, blocks, and mutes\nAnalyze data\tAccess metrics, trends, and engagement analytics\nBuild integrations\tSend DMs, manage lists, and interact with Spaces\n\nAPI versions\n\nX API v2 (Recommended)\n\nX API v1.1 (Legacy)\n\nEnterprise\n\nThe current version of the X API with modern features and flexible pricing.Why use v2:\n\nPay-per-usage pricing\nModern JSON response format\nFlexible fields and expansions\nAdvanced features: annotations, conversation tracking, edit history\nAll new endpoints and features\n\nGetting started:\n\nSign up at console.x.com\nCreate an app and get credentials\nMake your first request\n\nThe previous version of the X API. Limited support; use v2 for new projects.Still available:\n\nSome media upload endpoints\nLegacy streaming (deprecated)\nSome specialized endpoints\n\nMigrating to v2: See the migration guide for endpoint mapping and data format changes.\n\nHigh-volume access for businesses with advanced needs.Features:\n\nComplete firehose access\nHistorical data backfill\nDedicated support\nCustom rate limits\nCompliance streams\n\nContact enterprise sales →\n\nAvailable resources\n\nThe X API provides access to these resource types:\n\nPosts\n\nSearch, retrieve, create, and delete posts. Access timelines, threads, and quote posts.\n\nUsers\n\nLook up profiles, manage relationships, and access follower data.\n\nSpaces\n\nDiscover live audio conversations and participants.\n\nDirect Messages\n\nSend and receive private messages between users.\n\nLists\n\nCreate and manage curated lists of accounts.\n\nTrends\n\nAccess trending topics by location.\n\nv2 highlights\n\nFields and expansions\n\nRequest only the data you need. Use fields parameters to select specific attributes and expansions to include related objects.\n\ncurl \"https://api.x.com/2/tweets/123?tweet.fields=created_at,public_metrics\u0026expansions=author_id\u0026user.fields=username\" \\\n  -H \"Authorization: Bearer $TOKEN\"\n\n\nLearn more about fields →\n\nPost annotations\n\nPosts include semantic annotations identifying people, places, products, and topics. Filter streams and searches by topic.Learn more about annotations →\n\nEngagement metrics\n\nAccess public metrics (likes, reposts, replies) and private metrics (impressions, clicks) for your own posts.Learn more about metrics →\n\nConversation tracking\n\nEdit history\n\nAccess the edit history of posts, including all previous versions and edit metadata.Learn more about edit posts →\n\nPricing\n\nX API v2 uses pay-per-usage pricing:\n\nBenefit\tDescription\nNo subscriptions\tPay only for what you use\nCredit-based\tPurchase credits, deducted per request\nReal-time tracking\tMonitor usage in the Developer Console\nDeduplication\tSame resource requested twice in 24 hours is only charged once\n\nPay-per-usage plans are subject to a monthly cap of 2 million Post reads. If you need higher volume, consider an Enterprise plan.\n\nView pricing details →\n\nNext steps",
    "link": "https://docs.x.com/x-api/getting-started/about-x-api",
    "snippet": "Pay-per-usage pricing; Modern JSON response format; Flexible fields and expansions; Advanced features: annotations, conversation tracking, edit history ...",
    "title": "About the X API - X Developer Platform"
  },
  {
    "content_readable": "Understanding Twitter API Pricing Tiers and Alternatives\n\nTwitter's API pricing can be confusing. Between the free tier's harsh limitations, the Basic tier's constraints, and the jump to $5,000/month for Pro access, many developers struggle to find the right fit for their needs.\n\nThis guide breaks down exactly what you get at each Twitter API pricing tier, helps you calculate your actual costs, and explains when alternative approaches might serve you better.\n\nCurrent Twitter API Pricing Structure (2026)\n\nTwitter (now X) offers four primary API access tiers:\n\nFree Tier - $0/month\n\nWhat You Get:\n\n1 App ID\nTweet posting (up to 1,500 tweets/month)\n~1 request per 15 minutes for tweet retrieval\nUser lookup (basic)\nNo search functionality\n\nRate Limits:\n\nTweets: ~1/15 min read, 1,500/month write\nUsers: 100 requests/24 hours\n\nBest For:\n\nPersonal bots that post content\nTesting API connectivity\nHobby projects with minimal data needs\n\nNot Suitable For:\n\nAny meaningful data analysis\nBrand monitoring\nResearch projects\nProduction applications\n\nBasic Tier - $100/month\n\nWhat You Get:\n\n2 App IDs\n10,000 tweets read/month\nTweet posting (3,000 tweets/month)\n7-day search history only\nUser lookup and follows\nBasic search operators\n\nRate Limits:\n\nTweets: 10,000/month read\nSearch: 60 requests/15 min, 7-day limit\nUsers: 500 requests/24 hours\n\nBest For:\n\nSmall-scale monitoring projects\nSimple integrations\nLow-volume applications\n\nLimitations:\n\n7-day search limit kills historical analysis\n10K tweets runs out quickly for active monitoring\nNo streaming access\n\nPro Tier - $5,000/month\n\nWhat You Get:\n\n3 App IDs\n1,000,000 tweets/month\nFull archive search (all historical tweets)\nStreaming access\nAdvanced search operators\nHigher rate limits\n\nRate Limits:\n\nTweets: 1M/month\nSearch: 300 requests/15 min, full archive\nStreaming: 50 concurrent connections\n\nBest For:\n\nProfessional applications\nResearch at scale\nCompanies with dedicated social analytics\nApplications requiring historical data\n\nThe Problem: $5,000/month is a 50x jump from Basic with no intermediate option.\n\nEnterprise Tier - $42,000+/month\n\nWhat You Get:\n\nCustom limits based on negotiation\nDedicated account management\nHigher reliability SLAs\nPriority support\nCompliance features\n\nPricing:\n\nStarts at ~$42,000/month\nCustom quotes based on volume\nAnnual contracts typical\n\nBest For:\n\nLarge enterprises\nMission-critical applications\nHigh-volume data needs\nCompanies requiring compliance documentation\n\nCalculating Your Actual Costs\n\nHow to Estimate Tweet Volume\n\nBefore choosing a tier, estimate your monthly tweet needs:\n\nBrand Monitoring:\n\nSmall brand: 1,000-5,000 mentions/month\nMedium brand: 5,000-50,000 mentions/month\nLarge brand: 50,000-500,000+ mentions/month\n\nCompetitor Analysis:\n\nPer competitor: 500-5,000 tweets/month\n5 competitors: 2,500-25,000 tweets/month\n\nKeyword Tracking:\n\nNiche keyword: 100-1,000 tweets/day\nPopular keyword: 1,000-100,000+ tweets/day\n\nResearch Projects:\n\nThesis-level: 10,000-100,000 tweets\nLarge-scale study: 1M+ tweets\n\nCost Per Tweet Breakdown\n\nTier\tMonthly Cost\tTweets Included\tCost per 1K Tweets\nFree\t$0\t~100\tN/A\nBasic\t$100\t10,000\t$10.00\nPro\t$5,000\t1,000,000\t$5.00\nEnterprise\t$42,000+\tCustom\t~$2-4.00\n\nHidden Costs\n\nBeyond subscription fees, consider:\n\nDevelopment Time:\n\nAPI integration: 20-40 hours\nError handling: 10-20 hours\nMaintenance: 2-5 hours/month\n\nInfrastructure:\n\nServers for data processing\nDatabase storage\nMonitoring tools\n\nOverages:\n\nBasic: No overage option—hits stop when exhausted\nPro: Additional costs for exceeding limits\n\nThe Gap Problem\n\nTwitter's pricing creates an uncomfortable gap:\n\nBasic at $100/month gives you 10,000 tweets with a 7-day search limit\nPro at $5,000/month gives you 1,000,000 tweets with full archive\n\nThere's no middle ground. If you need:\n\nMore than 10,000 tweets/month, or\nHistorical search beyond 7 days\n\n...you're looking at a 50x price increase.\n\nThis gap has driven many users toward alternatives.\n\nWhen Alternatives Make Sense\n\nYou Need More Than Basic But Less Than Pro\n\nIf your needs fall between 10,000 and 1,000,000 tweets/month, alternatives often cost less than the $5,000 Pro tier.\n\nExample Scenario:\n\nNeed: 100,000 tweets/month\nOfficial API: $5,000/month (Pro tier required)\nThird-party alternative: $100-500/month typically\n\nYou Need Historical Data\n\nBasic tier's 7-day search limit is dealbreaking for:\n\nCompetitive analysis over time\nTrend research\nHistorical brand monitoring\nAcademic research\n\nAlternatives typically offer historical access at all price points.\n\nYou're Budget-Constrained\n\nMany legitimate use cases don't justify $5,000/month:\n\nAcademic research projects\nStartup MVPs\nSmall business monitoring\nPersonal projects\n\nYou Need Multi-Platform Data\n\nIf you're analyzing Twitter alongside Instagram, TikTok, or Reddit, managing multiple API integrations adds complexity. Some alternatives provide unified access.\n\nAlternative Options\n\nFor a comprehensive comparison of Twitter data providers, see our Best Twitter API Alternatives 2026 guide.\n\nThird-Party Data APIs\n\nServices that provide Twitter data through their own infrastructure:\n\nTypical Pricing: $0.10-0.50 per 1,000 tweets Advantages: Lower cost, historical access, simpler integration Considerations: Data coverage may not be 100%\n\nAI-Native Solutions (MCP)\n\nNatural language access through AI assistants is transforming how teams query social data. Learn more in our tutorial on How to Query Twitter with Claude AI.\n\nExample - Xpoz:\n\nFree: 100,000 results/month\nPro ($20/month): 1,000,000 results/month\nMax ($200/month): 10,000,000 results/month\n\nAdvantages: No coding required, multi-platform, exploratory-friendly Considerations: Requires AI assistant (Claude/ChatGPT)\n\nAcademic Programs\n\nTwitter has offered academic research access with elevated limits:\n\nCheck current availability (programs change)\nRequires institutional affiliation\nApplication and approval process\nRestrictions on commercial use\n\nHow Xpoz Addresses Pricing Gaps\n\nXpoz fills the gap between Twitter's Basic and Pro tiers with predictable, transparent pricing:\n\nPricing Comparison\n\nNeed\tOfficial API\tXpoz\n100K results/month\t$5,000 (Pro required)\t$0 (Free)\n1M results/month\t$5,000 (Pro)\t$20 (Pro)\n10M results/month\t$5,000+ (Pro+)\t$200 (Max)\n\nWhat You Get\n\nFree Tier ($0/month):\n\n100,000 results/month\nTwitter, Instagram, TikTok, Reddit access\nHistorical data\nCSV export\n\nPro Tier ($20/month):\n\n1,000,000 results/month\nAll platforms\nPriority processing\n\nMax Tier ($200/month):\n\n10,000,000 results/month\nAll platforms\nHighest throughput\n\nKey Differentiators\n\nNo Rate Limit Errors: Xpoz handles throttling internally Multi-Platform: One subscription covers Twitter, Instagram, TikTok, Reddit Natural Language: Query data through AI assistants without coding Historical Access: Available on all tiers, including free\n\nSample Queries\n\n\"Find all tweets mentioning 'product launch' from verified accounts\nin the past 30 days, sorted by retweet count\"\n\n\"How many times was 'competitor brand' mentioned on Twitter\nlast month vs. this month?\"\n\n\"Who are the top 100 accounts by follower count that discussed\n'AI regulation' this week?\"\n\n\nDecision Framework\n\nChoose Official API When:\n\nYou need write access (posting tweets, managing accounts)\nYou require Ads API integration\nYour industry has compliance requirements mandating official data sources\nYou need real-time streaming at scale\nBudget accommodates $5,000+/month\n\nChoose Alternatives When:\n\nYour needs fall between Basic and Pro volumes\nYou need historical data without Pro pricing\nYou want multi-platform coverage\nYou prefer natural language over API coding\nBudget is under $500/month\nYou only need read access (no posting)\n\nHybrid Approach\n\nMany organizations use both:\n\nOfficial API Basic ($100/month) for posting and account management\nAlternative for large-scale data retrieval and analysis\n\nThis captures write functionality while keeping read costs manageable.\n\nKey Takeaways\n\nTwitter API pricing has a significant gap between Basic ($100/month, 10K tweets) and Pro ($5,000/month, 1M tweets).\n\nBasic tier's 7-day search limit makes it unsuitable for historical analysis, competitive research, or trend tracking.\n\nAlternatives fill the pricing gap with options between $0-500/month for volumes that would otherwise require Pro.\n\nMulti-platform solutions like Xpoz provide Twitter, Instagram, TikTok, and Reddit access under one subscription.\n\nAI-native access eliminates API integration complexity for teams without dedicated developers.\n\nConsider total cost of ownership including development time, not just subscription fees.\n\nHybrid approaches work using official API for write access and alternatives for cost-effective read access.\n\nFrequently Asked Questions\n\nHow much does the Twitter API cost in 2026?\n\nTwitter API pricing has four tiers: Free ($0 with severe limits), Basic ($100/month for 10K tweets), Pro ($5,000/month for 1M tweets), and Enterprise ($42,000+/month for custom limits). The biggest gap is between Basic and Pro—a 50x price increase with no intermediate option.\n\nIs there a free Twitter API?\n\nYes, Twitter offers a free API tier, but it's extremely limited. You get approximately 1 request per 15 minutes for reading tweets, 1,500 tweets/month posting limit, and no search functionality. It's only suitable for personal bots or basic testing.\n\nWhy did Twitter API become so expensive?\n\nAfter Elon Musk's acquisition in late 2022, Twitter restructured API pricing to reduce free data access and increase revenue. The Basic tier went from free to $100/month, and the Pro tier jumped to $5,000/month—eliminating the previously affordable $99-299/month options.\n\nWhat are cheaper alternatives to Twitter's Pro API?\n\nThird-party data providers like Xpoz offer Twitter data access at a fraction of official API costs. For example, Xpoz provides 1M results/month for $20 (vs $5,000 for official Pro), with multi-platform coverage (Twitter, Instagram, TikTok, Reddit) included.\n\nConclusion\n\nTwitter's API pricing structure serves certain use cases well—particularly large enterprises needing write access and compliance guarantees. But for the vast majority of data retrieval needs, the gap between Basic and Pro creates unnecessary cost barriers.\n\nUnderstanding your actual requirements—tweet volume, historical needs, platform coverage—reveals whether the official API or an alternative approach makes more financial sense. For many users, solutions like Xpoz provide the data access they need at a fraction of official API costs, without sacrificing the functionality that matters for their use case.\n\nStart by estimating your monthly tweet volume, checking if you need historical access, and determining whether you require write functionality. That analysis typically makes the right choice clear.",
    "link": "https://www.xpoz.ai/blog/guides/understanding-twitter-api-pricing-tiers-and-alternatives/",
    "snippet": "Twitter API pricing has four tiers: Free ($0 with severe limits), Basic ($100/month for 10K tweets), Pro ($5,000/month for 1M tweets), and ...",
    "title": "Twitter API Pricing 2026: Tiers, Costs \u0026 Alternatives - Xpoz"
  },
  {
    "content_readable": "This is part one of the Advanced Use Cases series:\n\n1️⃣ Extract Metadata from Queries to Improve Retrieval\n\n2️⃣ Query Expansion\n\n3️⃣ Query Decomposition\n\n4️⃣ Automated Metadata Enrichment\n\nSometimes a single question is multiple questions in disguise. For example: “Did Microsoft or Google make more money last year?”. To get to the correct answer for this seemingly simple question, we actually have to break it down: “How much money did Google make last year?” and “How much money did Microsoft make last year?”. Only if we know the answer to these 2 questions can we reason about the final answer.\n\nThis is where query decomposition comes in. This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach:\n\nDecompose the original question into smaller questions that can be answered independently to each other. Let’s call these ‘sub questions’ here on out.\nReason about the final answer to the original question, based on each sub-answer.\n\nWhile for many query/dataset combinations, this may not be required, for some, it very well may be. At the end of the day, often one query results in one retrieval step. If within that one single retrieval step we are unable to have the retriever return both the money Microsoft made last year and Google, then the system will struggle to produce an accurate final response.\n\nThis method ensures that we are:\n\nretrieving the relevant context for each sub question.\nreasoning about the final answer given each answer based on the contexts retrieved for each sub question.\n\nIn this article, I’ll be going through some key steps that allow you to achieve this. You can find the full working example and code in the linked recipe from our cookbook. Here, I’ll only show the most relevant parts of the code.\n\n🚀 I’m sneaking something extra into this article. I saw the opportunity to try out the structured output functionality (currently in beta) by OpenAI to create this example. For this step, I extended the OpenAIGenerator in Haystack to be able to work with Pydantic schemas. More on this in the next step.\n\nLet’s try build a full pipeline that makes use of query decomposition and reasoning. We’ll use a dataset about Game of Thrones (a classic for Haystack) which you can find preprocessed and chunked on Tuana/game-of-thrones on Hugging Face Datasets.\n\nDefining our Questions Structure\n\nOur first step is to create a structure within which we can contain the subquestions, and each of their answers. This will be used by our OpenAIGenerator to produce a structured output.\n\nfrom pydantic import BaseModel\n\nclass Question(BaseModel):\n    question: str\n    answer: Optional[str] = None\n\nclass Questions(BaseModel):\n    questions: list[Question]\n\n\nThe structure is simple, we have Questions made up of a list of Question. Each Question has the question string as well as an optional answer to that question.\n\nDefining the Prompt for Query Decomposition\n\nNext up, we need to get an LLM to decompose a question and produce multiple questions. Here, we will start making use of our Questions schema.\n\nsplitter_prompt = \"\"\"\nYou are a helpful assistant that prepares queries that will be sent to a search component.\nSometimes, these queries are very complex.\nYour job is to simplify complex queries into multiple queries that can be answered\nin isolation to eachother.\n\nIf the query is simple, then keep it as it is.\nExamples\n1. Query: Did Microsoft or Google make more money last year?\n   Decomposed Questions: [Question(question='How much profit did Microsoft make last year?', answer=None), Question(question='How much profit did Google make last year?', answer=None)]\n2. Query: What is the capital of France?\n   Decomposed Questions: [Question(question='What is the capital of France?', answer=None)]\n3. Query: {{question}}\n   Decomposed Questions:\n\"\"\"\n\nbuilder = PromptBuilder(splitter_prompt)\nllm = OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions})\n\n\nAnswering Each Sub Question\n\nFirst, let’s build a pipeline that uses the splitter_prompt to decompose our question:\n\nquery_decomposition_pipeline = Pipeline()\n\nquery_decomposition_pipeline.add_component(\"prompt\", PromptBuilder(splitter_prompt))\nquery_decomposition_pipeline.add_component(\"llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\n\nquery_decomposition_pipeline.connect(\"prompt\", \"llm\")\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question}})\nprint(result[\"llm\"][\"structured_reply\"])\n\n\nThis produces the following Questions (List[Question])\n\nquestions=[Question(question='How many siblings does Jamie have?', answer=None), \n           Question(question='How many siblings does Sansa have?', answer=None)]\n\n\nNow, we have to fill in the answer fields. For this step, we need to have a separate prompt and two custom components:\n\nThe CohereMultiTextEmbedder which can take multiple questions rather than a single one like the CohereTextEmbedder.\nThe MultiQueryInMemoryEmbeddingRetriever which can again, take multiple questions and their embeddings, returning question_context_pairs. Each pair contains the question and documents that are relevant to that question.\n\nNext, we need to construct a prompt that can instruct a model to answer each subquestion:\n\nmulti_query_template = \"\"\"\nYou are a helpful assistant that can answer complex queries.\nHere is the original question you were asked: {{question}}\n\nAnd you have split the task into the following questions:\n{% for pair in question_context_pairs %}\n  {{pair.question}}\n{% endfor %}\n\nHere are the question and context pairs for each question.\nFor each question, generate the question answer pair as a structured output\n{% for pair in question_context_pairs %}\n  Question: {{pair.question}}\n  Context: {{pair.documents}}\n{% endfor %}\nAnswers:\n\"\"\"\n\nmulti_query_prompt = PromptBuilder(multi_query_template)\n\n\nLet’s build a pipeline that can answer each individual sub question. We will call this the query_decomposition_pipeline :\n\nquery_decomposition_pipeline = Pipeline()\n\nquery_decomposition_pipeline.add_component(\"prompt\", PromptBuilder(splitter_prompt))\nquery_decomposition_pipeline.add_component(\"llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\nquery_decomposition_pipeline.add_component(\"embedder\", CohereMultiTextEmbedder(model=\"embed-multilingual-v3.0\"))\nquery_decomposition_pipeline.add_component(\"multi_query_retriever\", MultiQueryInMemoryEmbeddingRetriever(InMemoryEmbeddingRetriever(document_store=document_store)))\nquery_decomposition_pipeline.add_component(\"multi_query_prompt\", PromptBuilder(multi_query_template))\nquery_decomposition_pipeline.add_component(\"query_resolver_llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\n\nquery_decomposition_pipeline.connect(\"prompt\", \"llm\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"embedder.questions\")\nquery_decomposition_pipeline.connect(\"embedder.embeddings\", \"multi_query_retriever.query_embeddings\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"multi_query_retriever.queries\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"embedder.questions\")\nquery_decomposition_pipeline.connect(\"multi_query_retriever.question_context_pairs\", \"multi_query_prompt.question_context_pairs\")\nquery_decomposition_pipeline.connect(\"multi_query_prompt\", \"query_resolver_llm\")\n\n\nRunning this pipeline with the original question “Who has more siblings, Jamie or Sansa?”, results in the following structured output:\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question},\n                                           \"multi_query_prompt\": {\"question\": question}})\n\nprint(result[\"query_resolver_llm\"][\"structured_reply\"])\n\n\nquestions=[Question(question='How many siblings does Jamie have?', answer='2 (Cersei Lannister, Tyrion Lannister)'),\n           Question(question='How many siblings does Sansa have?', answer='5 (Robb Stark, Arya Stark, Bran Stark, Rickon Stark, Jon Snow)')]\n\n\nReasoning About the Final Answer\n\nThe final step we have to take is to reason about the ultimate answer to the original question. Again, we create a prompt that will instruct an LLM to do this. Given we have the questions output that contains each sub question and answer, we will make these inputs to this final prompt.\n\nreasoning_template = \"\"\"\nYou are a helpful assistant that can answer complex queries.\nHere is the original question you were asked: {{question}}\n\nYou have split this question up into simpler questions that can be answered in\nisolation.\nHere are the questions and answers that you've generated\n{% for pair in question_answer_pair %}\n  {{pair}}\n{% endfor %}\n\nReason about the final answer to the original query based on these questions and\naswers\nFinal Answer:\n\"\"\"\n\nresoning_prompt = PromptBuilder(reasoning_template)\n\n\nTo be able to augment this prompt with the question answer pairs, we will have to extend our previous pipeline and connect the structured_reply from the previous LLM, to the question_answer_pair input of this prompt.\n\nquery_decomposition_pipeline.add_component(\"reasoning_prompt\", PromptBuilder(reasoning_template))\nquery_decomposition_pipeline.add_component(\"reasoning_llm\", OpenAIGenerator(model=\"gpt-4o-mini\"))\n\nquery_decomposition_pipeline.connect(\"query_resolver_llm.structured_reply\", \"reasoning_prompt.question_answer_pair\")\nquery_decomposition_pipeline.connect(\"reasoning_prompt\", \"reasoning_llm\")\n\n\nNow, let’s run this final pipeline and see what results we get:\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question},\n                                           \"multi_query_prompt\": {\"question\": question},\n                                           \"reasoning_prompt\": {\"question\": question}},\n                                           include_outputs_from=[\"query_resolver_llm\"])\n\nprint(\"The original query was split and resolved:\\n\")\n\nfor pair in result[\"query_resolver_llm\"][\"structured_reply\"].questions:\n  print(pair)\nprint(\"\\nSo the original query is answered as follows:\\n\")\nprint(result[\"reasoning_llm\"][\"replies\"][0])\n\n\n🥁 Drum roll please:\n\nThe original query was split and resolved:\n\nquestion='How many siblings does Jaime have?' answer='Jaime has one sister (Cersei) and one younger brother (Tyrion), making a total of 2 siblings.'\nquestion='How many siblings does Sansa have?' answer='Sansa has five siblings: one older brother (Robb), one younger sister (Arya), and two younger brothers (Bran and Rickon), as well as one older illegitimate half-brother (Jon Snow).'\n\nSo the original query is answered as follows:\n\nTo determine who has more siblings between Jaime and Sansa, we need to compare the number of siblings each has based on the provided answers.\n\nFrom the answers:\n- Jaime has 2 siblings (Cersei and Tyrion).\n- Sansa has 5 siblings (Robb, Arya, Bran, Rickon, and Jon Snow).\n\nSince Sansa has 5 siblings and Jaime has 2 siblings, we can conclude that Sansa has more siblings than Jaime.\n\nFinal Answer: Sansa has more siblings than Jaime.\n\n\nWrapping up\n\nGiven the right instructions, LLMs are good at breaking down tasks. Query decomposition is a great way we can make sure we do that for questions that are multiple questions in disguise.\n\nIn this article, you learned how to implement this technique with a twist 🙂 Let us know what you think about using structured outputs for these sorts of use cases. And check out the Haystack experimental repo to see what new features we’re working on.",
    "link": "https://haystack.deepset.ai/blog/query-decomposition",
    "snippet": "This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach.",
    "title": "Advanced RAG: Query Decomposition \u0026 Reasoning - Haystack"
  },
  {
    "content_readable": "Updated\n\n4 days ago\n\nWith millions of conversations happening all over the web each day, it can be a long and tedious task trying to get more relevant mentions and tighten the scope of your query, but with the help of Advanced Topic Query, it can be at your fingertips.\n\nIn Social Listening, you have the option to create an advanced query that is not limited to ANY, ALL, or NONE formatting of query building. Advanced query builder can be used to form complex text queries which are not possible with a normal query builder.\n\nWhat is an Advanced Topic Query?\n\nAdvanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more.\n\nBy using advanced query you can pinpoint relevant information which is not possible with basic topic query.\n\nIt gives you the power to find the needle in a haystack.\n\n​\n\nBasic Topic Query v/s Advanced Topic Query\n\nWith more operators to use you can fetch conversations by language, geography, social media channel, volume, author, #listening, @account monitoring, user segment, and much more, it can give you access to more actionable insights.\n\nIn Basic Query, you can only use boolean operators like OR/ NOT/ AND/ along with NEAR. On the other hand, in Advanced Topic Query, it gives you access to use OR with/ inside AND, NOT (nested and within operator use cases), advanced operators, exact match operators etc.\n\nLet's see the use cases where advanced query will help in getting more insightful mentions –\n\nUse case #1: To search \"pepsi\" OR \"drink\" along with \"cups\".\n\nBasic Query\n\nAdvancd Query\n\nUse case #2: To get mentions of \"pepsi\" along with \"coke\" or \"sprite\" but not \"miranda\" with people having \"follower count\" between 100 to 1000 on \"twitter\".\n\nBasic Query\n\nAdvanced Query\n\nNot feasible in the basic Topic query\n\nThis is where we need the advanced Topic query.​\n\nHow to create an advanced Topic query?\n\nClick the New Tab icon. Under Sprinklr Insights, Click Topics within Listening.\n\nOn the Topics window, click Add Topic in the top right corner. Fill in the required fields and click Create.\n\nIn the Setup Query tab of Create New Topic window, select Advanced Query in the query section.\n\n​\n\nType your query in the Advanced Query field with the required operators and syntax.\n\nClick Save.\n\nTip: While using Instagram as a Listening Source, be sure that your query keywords include hashtags.\n\nWhich operators to use for building Topic queries?\n\nOperators for Topic queries\n\nIn creation of advanced queries along with boolean operators OR/ AND/ NOT/ etc, Sprinklr also supports operator types –\n\nSearch Operators\n\nExact Match Operators\n\nOperators for Getting Post Replies/Comments​\n\nSprinklr provides its user edge by giving them power to use Keywords List inside advanced query along with Operators mentioned.\n\nCreate query using Topic query operators\n\nFollowing are some most used operator examples and their results –\n\nOperator\n\nExample\n\nResult\n\nhello\n\nSearch for the term \"hello\"\n\nsocial sprinklr\n\nSearch for the phrases \"social\" and \"sprinklr\"\n\n​\n\nNote: Using this will show preview but topic can not be saved as it will show error, Use \"Social Sprinklr\" or (Social AND/OR/ NOT/ NEAR Sprinklr) to eliminate error.\n\nAND\n\nsocial AND sprinklr\n\nSearch for \"social\" and \"sprinklr\" anywhere within the complete message, irrespective of keywords between them\n\nOR\n\nsocial OR sprinklr\n\nSearch for \"social\" or \"sprinklr\"\n\nNOT\n\n\"social media\" NOT \"facebook\"\n\nSearch for results that contain \"social media\" but not \"facebook\"\n\n~\n\n\"social media\"~10\n\nSearch for \"social\" and \"media\" within 10 words of each other\n\nNEAR\n\nsocial NEAR/10 media\n\nSearch for \"social\" and \"media\" within 10 words of each other\n\nNote: This operator can be used with keyword lists.\n\nONEAR\n\nsocial ONEAR/10 media\n\nSearch for \"social\" and \"media\" within 10 words of each other in an ordered way\n\nNote: This operator searches social ahead of media.\n\ntitle\n\ntitle: (\"social media\")\n\nSearch for social media in the title of the message\n\nNote: It is mostly used for News, blogs, reviews and other sites.\n\nauthor\n\nauthor: \"social_media\"\n\nFetches all the mentions from author name: social_media\n\nSome other operators which are supported by Sprinklr are –\n\nProximity: It is used to define proximity or distance between 2 keywords only, whereas, NEAR can be used to define proximity between two keywords as well as keyword lists.\n\nOnear (Ordered Near): It sets the order in which the keywords will appear. For example, Keyword-List1 ONEAR/10 Keyword-List2 will ensure keywords from Keyword-List1 appear first and then Keyword-List2 keywords will follow within space of maximum 10.\n\nStep by step guide to make advanced Topic query\n\nUse case\n\nTo write query fetching mentions of ZARA –\n\n​\n\n(# listening is used for instagram listening)\n\nGetting mention along with clothing or fashion related terms only –\n\nRemoving profanity from mention (use case specific) –\n\nRemoving profanity from mention (use case specific) –\n\nAs social media has lots of profane words you can also remove it by making a keyword list and negating it from query –\n\nFiltering Mentions in English –\n\n​\n\nApplying source input as Twitter –\n\nGetting mentions of those users which have followers between 100 to 1000 –\n\n​\n\nAdvanced example showcasing use of Topic query operators and keyword list –\n\nBest practices while using Advanced Query\n\nUse of Parentheses\n\n​Parentheses are not necessary to enclose a search query but can be useful while grouping operations together for more complex queries.\n\n​\n\nFor example, if you want to return results that mention Samsung or Apple phones, and also want to query content that mentions phones along with either Apple or Samsung, you could use parentheses around Apple and Samsung to group three keywords together, as shown below –\n\nphone AND (Apple OR Samsung)\n\n​\n\nUse of parentheses within brackets, is further explained below with an example –\n\n[(internet of things ~3) OR iot OR internetofthings) AND (robots OR robot OR #robot)] NOT [things]\n\nTip: You can also use parentheses within brackets to set off additional operations within the Advanced Query field. The end result should look similar to the result summary of a basic query, built using multiple operations within a single section.\n\n\nAs a part of the rest of the query, this will perform the following operations –\n\nSearch for posts that contain the phrase \"internet of things\" or \"#internetofthings\"\n\nFrom within those results, keep any result that also says \"robots\" or \"robot\" or \"#robot\" within three words (a proximity search) of either \"internet of things\" or \"iot\" or \"internetofthings\".\n\nDiscard any results that just have the phrase \"things\" within.\n\nParentheses nested within brackets intend to set off different operations as isolated processes. In the previous example, if you build an Advanced Query that states [(internet of things OR iot OR internet of things) AND (robots OR robot OR #robot)] your query will return results that contain ANY of the first three terms and the second three terms.\n\nHowever, if you build an Advanced Query that states [internet of things OR iot OR internet of things AND robots OR robot OR #robot], your query will return any result that contains the phrase \"internet of things\" or the word \"iot\" or the word \"robot\" or the hashtag #robot or specifically the phrase \"internet of things\" within the same message as the word \"robots\".\n\nNote:\n\nYou cannot use a \"NOT\" statement with an \"OR\" statement.\n\n\nExample:\n( social OR NOT media ) ❌\n( social NOT media ) ✅\n\n(( social OR ( media NOT facebook )) ✅\n\nWhy?\n\nQuery should not contain \"NOT\" terms in \"OR\" with other terms, \"NOT\" clauses should be used in \"AND\" with other terms, using \"NOT\" in \"OR\" will bring too much data.\n\nUse of Quotation marks\n\nQuotation marks can be used for phrases in which you are looking for an exact match of those particular words in a specific order. Using parentheses or quotation marks for single-word queries is not mandatory.\n\nUse straight quotation marks ( \" \" ) for outlining phrases within it. The use of curved quotation marks (“ ”) will not produce your desired results.\n\nParentheses are generally used to group keywords or phrases joined by one or more operators together, but with other keywords involved, parentheses and quotations would act differently. For example –\n\nVersion 1: \"Phil Schiller\" AND \"Apple Marketing\" will return results for content with the exact phrase Phil Schiller (or phil schiller) and the exact phrase Apple Marketing (or apple marketing).\n\nNote: Here exact does not mean case sensitive as in the case of exactMessage Operator.\n\nExample: exactMessage: (\"Phil Schiller\" AND \"Apple Marketing\"), which will fetch results for phrase Phil Schiller (not phil schiller) and the exact phrase Apple Marketing (not apple marketing).\n\n\nVersion 2: \"Phil Schiller\" AND (Apple OR Marketing) will return results for content with the phrase \"Phil Schiller\" (together) and at least one of the words, Apple or Marketing.\n\nHandling for Broad \u0026 Ambiguous Keywords\n\nIt is very important to not use/reduce use of broad keywords in advanced queries. Broader keywords will fetch mentions that are unrelated to topic of interest, and eventually hinder dashboard/insights\n\nFor all keywords used in an advanced topic query, ensure they are directly related to the topic of interest.\n\nIn case keywords are broad but relevant to topic, they should be tied to some relevant keywords related to that topic, by using NEAR Operators\n\nExample: Robot is an important keyword for Robot Company. However just using this keyword will fetch irrelevant keywords as it’s a broad keyword used for other entities as well (Robot Street, etc).\n\nInstead of using just Robot keyword, we should use: Robot NEAR/4 (Technology OR “machine” OR # tech OR IOT OR “Internet of things” ….)\n\nNote how keywords related to Robot are used with NEAR Operator. Related keywords could be related entities, industry keywords, parent company, country keywords, etc.\n\nFrequently asked questions\n\n​\n\nIs it compulsory to put quotation marks around phrases like \"apple music\" or can we use apple music directly?\n\nHow can I eliminate posts with many spam #’s or @’s?\n\nCan exact match or parent operators be used in advanced query?\n\nWhy am I able to see mentions in preview during making of topic but not in dashboard?\n\nDuring listening to @ mentions a lot of spam mentions are also getting tagged along, e.g. like wanting to get mentions of @tom but messages of @tom_fan56 are also coming. How to remove these irrelevant mentions?\n\nIf I write query as “tom” will it also fetch mentions such as tom_jerry / @tom / #tom ?\n\n​",
    "link": "https://www.sprinklr.com/help/articles/faqs-and-advanced-usecases/create-an-advanced-topic-query/646331628ea3c9635cf36711",
    "snippet": "Advanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more. By ...",
    "title": "‎Create an Advanced Topic Query | Sprinklr Help Center"
  },
  {
    "content_readable": "July 8, 2021, 7:10pm 1\n\nHello! I’m struggling to figure out how to write a query to get all the todos on a particular page. I thought there might be a generic way to do this for any page, but I’m even struggling to get it to work for a specific page using the page’s name. I think part of it is that I’m just learning the query syntax. Here’s what I’ve tried and learned:\n\n{{query todo}} gives all the todos in my entire database\nThen I tried to use a filter by page: {{query todo [[the book]] }} (“the book” is an alias to the page I’m trying to collect todos from). This is giving the same results as #1, which surprised me.\nI thought, maybe I need to explicitly use logical expressions to tie the parts of the query together. So I tried {{query (and todo [[the book]]) }}, which was empty.\n\nI’d appreciate any help filling the gaps in my knowledge and creating a query that collects todos on a given page.\n\n5 Likes\n\nc6p July 23, 2021, 2:02pm 2\n\nQuery works for me when To-Do states explicitly specified {{ query (and (todo todo doing) [[the book]]) }}.\n\n1 Like\n\njafish July 23, 2021, 3:21pm 3\n\nThank you @c6p, that worked! Now, any ideas on making this a more general query that will work on any page? I see that for templates there is a “current page” variable, but that doesn’t seem to apply to queries. I’ve also noticed some references to current page in the advanced query syntax, so maybe that’s where I need to be looking…\n\n1 Like\n\nc6p July 23, 2021, 5:56pm 4\n\nCheck out https://logseq.github.io/#/page/dynamic%20variables\n\n{{query (and (todo todo doing) (page \u003c% current page %\u003e))}} grabs To-dos for current page.\n\n10 Likes\n\njafish July 24, 2021, 1:24am 5\n\nThanks, I missed the “syntax” line on that documentation page when I was looking at it. I appreciate you taking the time to reply and share!\n\nLuhmann July 24, 2021, 2:05am 6\n\nSee my feature request for a “page specific NOW section”.\n\n1 Like\n\nsabre23t July 25, 2021, 12:32am 7\n\nShouldn’t this later reply by c6p be marked the solution @jafish?\n\n2 Likes\n\njafish July 25, 2021, 1:29am 8\n\nYes, thanks! I didn’t realize you could only mark one solution. I had marked them both, in reverse order, and, well, you saw what happened\n\n1 Like\n\nHulk September 6, 2021, 5:21am 9\n\nThis solution does not work for me. I have even copied and pasted your syntax directly in. Weird…\n\nc6p September 17, 2021, 7:40pm 10\n\nIt works for me. Could you make it work? Or what is your output? If you still have problems, you can get quicker responses on discord.\n\nHulk September 21, 2021, 7:18pm 11\n\nOkay. It’s working! I think it must have been another part of my query which went wrong.\n\n1 Like\n\nThanks! This works fine on my page (and is another game changer vs. Roam, where I just come from). However, when opening the same page in the sidebar the query gives me the TODOs from the page opened in my main view. Is this intended behavior or a bug?\n\nxavon January 16, 2024, 8:14pm 13\n\nis possible to target alias with dynamic variable ?\nbecause using \u003c% current page %\u003e , it only get the title of the page\n\nmentaloid January 16, 2024, 8:27pm 14\n\nThis should be possible with Advanced Queries, using :current-page and :block/alias .\n\nxavon January 16, 2024, 8:34pm 15\n\nmay i ask, if you could help me on that query ?\n\ni have a simple query that is inside a template:\n{{query (page-property -zoom \u003c% current page %\u003e )}}\n\ntried this but didnt work:\n{{query (page-property -zoom or(:current-page :block/alias) )}}\n\nmentaloid January 16, 2024, 8:44pm 16\n\nAs said, this needs Advanced Queries. You need to:\n\nfollow the link\ndo some reading\nexperiment and familiarize with advanced queries\ncome back when you have an advanced query to work with\n\nxavon January 16, 2024, 9:39pm 17\n\nshure i tried, but didnt get results:\n\n#+BEGIN_QUERY\n{\n:query [:find (pull ?p [*])\n:where\n(or\n(property ?p :-zoom :current-page)\n(property ?p :-zoom :block/alias)\n)]}\n#+END_QUERY\n\nso in the current page that goes that query , i want to find all the other pages that have the -zoom property that contains the title or alias of the page that goes the query\n\nmentaloid January 16, 2024, 10:56pm 18\n\nHere are two options:\n\n#+BEGIN_QUERY\n{\n :query [:find (pull ?p [*])\n   :in $ ?current-name\n   :where\n     (or-join [?p ?current-name]\n       (property ?p :-zoom ?current-name)\n       (and\n         [?current :block/name ?current-name]\n         [?current :block/alias ?alias]\n         [?alias :block/name ?alias-name]\n         (property ?p :-zoom ?alias-name)\n       )\n     )\n ]\n :inputs [:current-page]\n}\n#+END_QUERY\n\n\n#+BEGIN_QUERY\n{\n :query [:find (pull ?p [*])\n   :in $ ?current-name\n   :where\n     [?current :block/name ?current-name]\n     (or-join [?current ?name]\n       [?current :block/name ?name]\n       (and\n         [?current :block/alias ?alias]\n         [?alias :block/name ?name]\n       )\n     )\n     (property ?p :-zoom ?name)\n ]\n :inputs [:current-page]\n}\n#+END_QUERY\n\n\nxavon January 17, 2024, 12:00am 19\n\nyou are good on this bro.\nThe second one works, is querying correct , i just have one doubt on the way the information is returning\nI see that you use in the query,\n\n[:find (pull ?p [*])\n\n\nbut it is returning me the block instead of the page name, why ? i though the ?p would return me only the page name (that is what im seeking)\n\nmentaloid January 17, 2024, 8:02am 20\n\n?p is defined as a block (or actually something that has properties).\n\nIt participates in the query as a mere number from the database.\nTo get its name, add [?p :block/name ?p-name]\n\nThat will also limit the results to pages, as only page-blocks have a name.\nThen to list that name, replace (pull ?p [*]) with ?p-name",
    "link": "https://discuss.logseq.com/t/query-todos-on-current-page/1481",
    "snippet": "I'm struggling to figure out how to write a query to get all the todos on a particular page. I thought there might be a generic way to do this for any page.",
    "title": "Query Todos on Current Page - Questions \u0026 Help - Logseq"
  },
  {
    "content_readable": "Introduced in 2020, the GitHub user profile README allow individuals to give a long-form introduction. This multi-part tutorial explains how I setup my own profile to create dynamic content to aid discovery of my projects:\n\nwith the Liquid template engine and Shields (Part 1 of 4)\nusing GitHub's GraphQL API to query dynamic data about all my repos (keep reading below)\nfetching RSS and Social cards from third-party sites (Part 3 of 4)\nautomating updates with GitHub Actions (Part 4 of 4)\n\nYou can visit github.com/j12y to see the final result of what I came up with for my own profile page.\n\nThe GitHub Repo Gallery\n\nThe intended behavior for my repo gallery is to create something similar to pinned repositories but with a bit more visual pizzazz to identify what the projects are about.\n\nIn addition to source code, the repo can have metadata associated with it:\n\n✔️ Name of the repository\n✔️ Short description of the project\n✔️ Programming language used for the project\n✔️ List of tags / topics\n✔️ Image that can be used for social cards\n\nAbout\n\nThe About has editable fields to set the description and topics.\n\nSettings\n\nThe Settings includes a place to upload an image for social media preview cards.\n\nIf you don't set a preview card image, GitHub will generate one automatically that includes some basic profile statistics and your user profile image.\n\nGetting Started with the GitHub REST API\n\nThe way I structured this project is to build a library of any functions related to querying GitHub in src/gh.ts. I used a .env file to store my personal access (classic) token for authentication during local development.\n\n├── package.json\n├── .env\n├── src\n│   ├── app.ts\n│   ├── gh.ts\n│   └── template\n│       ├── README.liquid\n│       ├── contact.liquid\n│       └── gallery.liquid\n└── tsconfig.json\n\n\nI started by using REST endpoints with the Octokit library and TypeScript bindings.\n\n// src/gh.ts\nimport { Octokit } from 'octokit';\nimport { RestEndpointMethodTypes } from '@octokit/plugin-rest-endpoint-methods'\nconst octokit = new Octokit({ auth: process.env.TOKEN});\n\nexport class GitHub {\n    // GET /users/{user}\n    // https://docs.github.com/en/rest/users/users#get-a-user\n    async getUserDetails(user: string): Promise\u003cRestEndpointMethodTypes['users']['getByUsername']['response']['data']\u003e {\n        const { data } = await octokit.rest.users.getByUsername({\n            username: user\n        });\n\n        return data;\n    };\n}\n\n\nFrom src/app.ts I initialize the GithHub class, fetch the results, and can inspect the data being returned as a way to get comfortable with the various endpoints.\n\n// src/app.ts\nimport dotenv from 'dotenv';\nimport { GitHub } from \"./gh\";\n\nexport async function main() {\n  dotenv.config();\n  const gh = new GitHub()\n\n  const details = await gh.getUserDetails();\n  console.log(details);\n}\nmain();\n\n\nI typically get started on projects with simple tests like this to make sure all the various pieces to an integration can be configured and work together before getting too far.\n\nUse the GitHub GraphQL Endpoint\n\nTo get the data needed for the gallery layout, it would be necessary to make multiple calls to REST endpoints. In addition there is some data not yet available from the REST endpoint at all.\n\nSwitching to query using the GitHub GraphQL interface becomes helpful. This single endpoint can process a number of queries and give precise control over the data needed.\n\n💡 The GitHub GraphQL Explorer was fundamentally useful for me to get the right queries defined\n\nThis query needs authorization with the personal access token to fetch profile details about followers similar to some of the details returned from the REST endpoints.\n\n// src/gh.ts\n\nconst { graphql } = require(\"@octokit/graphql\")\n\nexport class GitHub \n    // https://docs.github.com/en/graphql\n    graphqlWithAuth = graphql.defaults({\n        headers: {\n            authorization: `token ${process.env.TOKEN}`\n        }\n    })\n\n    async getProfileOverview(name: string): Promise\u003cany\u003e {\n        const query = `\n            query getProfileOverview($name: String!) { \n                user(login: $name) { \n                    followers(first: 100) {\n                        totalCount\n                        edges {\n                            node {\n                                login\n                                name\n                                twitterUsername\n                                email\n                            }\n                        }\n                    }\n                }\n            }\n        `;\n        const params = {'name': name};\n\n        return await this.graphqlWithAuth(query, params);\n    }\n}\n\n\nThere are other resources such as Learn GraphQL if you haven't written many queries yet which explains the basics around syntax, schemas, and types.\n\nGetting used to GitHub's GraphQL schema primarily involves walking a series of edges to find linked nodes for objects of interest and their data attributes. In this case, I started by querying a user profile, finding the list of linked followers, and then inspecting their corresponding node's login, name, and email address.\n\n   ┌────────────┐\n   │    user    │\n   └─────┬──────┘\n         │\n         └──followers\n               │\n               ├─── totalCount\n               │\n               └─── edges\n                     │\n                     └── node\n\n\n\nFaceted Search by Topic Frequency\n\nI often want to find repositories by a topic. The user interface makes it easy to filter among many repositories by programming language such as python but unless you know which topics are relevant can become hit or miss. Was it nlp or nltk I used to categorize related repositories. Did I use dolby or dolbyio to identify repos I have for work projects.\n\nA faceted search that narrows down the number of matching repositories can be helpful for finding relevant projects like this. Given topics on GitHub are open-ended and not constrained to fixed values, it can be easy to accidentally categorize repos with variations like lambda and aws-lambda such that searches only identify partial results.\n\nTo address this, a GraphQL query gathering topics by frequency of usage within an organization or individual account can help with identifying the most useful topics.\n\nThe steps for this would be:\n\nQuery repository topics\nProcess results to group topics by frequency\nUse a template to render the gallery\n\n1 - Query Repository Topics\n\nI used the following GraphQL query to fetch my repositories and their corresponding topics.\n\nconst query = `\n    query getReposOverview($name: String!) {\n        user(login: $name) {\n            repositories(first: 100 ownerAffiliations: OWNER) {\n                edges {\n                    node {\n                        name\n                        url\n                        description\n                        openGraphImageUrl\n                        repositoryTopics(first: 100) {\n                            edges {\n                                node {\n                                    topic {\n                                        name\n                                    }\n                                }\n                            }\n                        }\n                        primaryLanguage {\n                            name\n                        }\n                    }\n                }\n            }\n        }\n    }\n`;\n\n\nThis query starts by filtering by user owned repositories (not counting forks) along with the metadata such as the social image.\n\n2 - Process Results and Group Topics by Frequency\n\nIterating over the results of the query the convention used was to look for anything with the topic github-gallery as something to be featured in the gallery. We also get a count of usage for each of the other topics and programming languages.\n\nvar topics: {[id: string]: number } = {};\nvar languages: {[id: string]: number } = {};\nvar gallery: {[id: string]: any } = {};\n\nconst repos = await gh.getReposOverview(user);\nfor (let repo of repos.user.repositories.edges) {\n  // Count occurrences of each topic\n  repo.node.repositoryTopics.edges.forEach((topic: any) =\u003e {\n    if (topic.node.topic.name == 'github-gallery') {\n      gallery[repo.node.name] = repo;\n    } else {\n      topics[topic.node.topic.name] = topic.node.topic.name in topics ? topics[topic.node.topic.name] + 1 : 1;\n    }\n  });\n\n  // Count and include count of language used\n  if (repo.node.primaryLanguage) {\n    languages[repo.node.primaryLanguage.name] = repo.node.primaryLanguage.name in languages ? languages[repo.node.primaryLanguage.name] + 1 : 1;\n  }\n}\n\n\n3 - Use a template to render the gallery\n\nThe topics are ordered by how often they are used. From the previous post on setting up a dynamic profile, I'm passing scope to the liquid engine for any data to be made available in a template.\n\n  // Share topics sorted by frequency of use for filtering repositories\n  // from the organization\n  scope['topics'] = Object.entries(topics).sort(function (first, second) {\n    return second[1] - first[1];\n  });\n  scope['languages'] = Object.entries(languages).sort(function (first, second) {\n    return second[1] - first[1];\n  });\n\n  // Gather topics across repos\n  scope['gallery'] = Object.values(gallery);\n\n\n\nThe repository page on GitHub uses query parameters to sort and filter, so items like topic:nltk can be passed directly in the URL to load a filtered view of repositories. The shields create a nice looking button for navigating to the topic, and use of icons for programming languages helps find relevant code samples.\n\n\u003cp\u003eExplore some of my projects: \u003cbr/\u003e\n{% for language in languages %}\u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=language%3A{{language[0]}}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/badge/{{ language[0] }}-{{ language[1] }}-lightgrey?logo={{ language[0] }}\u0026label={{ language[0] }}\u0026labelColor=000000\" alt=\"{{ language[0] }}\"/\u003e\u003c/a\u003e {% endfor %}\n{% for topic in topics %}\u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=topic%3A{{topic[0]}}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/static/v1?label={{topic[0]}}\u0026message={{ topic[1] }}\u0026labelColor=blue\"/\u003e\u003c/a\u003e {% endfor %}\n\u003c/p\u003e\n\n\nThe presentation includes a 3-column row in a table for displaying the metadata about each featured gallery project. This could display all repositories, but limiting to one or two rows seems sensible for managing screen space.\n\n{% for tile in gallery limit:3 %}\n\u003ctd width=\"25%\" valign=\"top\" style=\"padding-top: 20px; padding-bottom: 20px; padding-left: 30px; padding-right: 30px;\"\u003e\n\u003ca href=\"{{ tile.node.url }}\"\u003e\u003cimg src=\"{{ tile.node.openGraphImageUrl }}\"/\u003e\u003c/a\u003e\n\u003cp\u003e\u003cb\u003e\u003ca href=\"{{ tile.node.url }}\"\u003e{{ tile.node.name }}\u003c/b\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e{{ tile.node.description }}\u003cbr/\u003e\n{% for topic in tile.node.repositoryTopics.edges %} \u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=topic%3A{{topic.node.topic.name }}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/badge/{{ topic.node.topic.name | replace: \"-\", \"--\" }}-blue?style=pill\"/\u003e\u003c/a\u003e {% endfor %}\n\u003c/p\u003e\n\u003c/td\u003e\n{% endfor %}\n\n\nWith all of that put together, we now have a gallery that displays a picture along with the name, description, and tags. The picture can highlight a user interface, architectural diagram, or some other branded visual to help identify the purpose of the project visually.\n\nWe can also use this to maintain our list of topics and make finding relevant topics for an audience easier to discover.\n\nLearn more\n\nI hope this overview helps with getting yourself sorted. The next article will dive into some of the other ways of aggregating content.\n\nFetching RSS and Social Cards for GitHub Profile (Part 3 of 4)\nAutomating GitHub Profile Updates with Actions (Part 4 of 4)\n\nDid this help you get your own profile started? Let me know and follow to get notified about updates.",
    "link": "https://dev.to/j12y/query-github-repo-topics-using-graphql-35ha",
    "snippet": "Creating a customized user profile page for GitHub to showcase work projects and make navigation to relevant topics easier.",
    "title": "Query GitHub Repo Topics Using GraphQL - DEV Community"
  },
  {
    "content_readable": "December 22, 2023, 10:47am 1\n\nI have variables as shown below. What is the explanation for the fact that I can access both pages with query parameters in Staging environment, but not in production for the one that I created recently? (I can see parameters on the page created 1 month ago) When I look at the related topics in the forum, it is recommended not to use url path. That’s why I am trying this way. Is it bug caused by WeWeb or am I missing something? Is there a period of 30 minutes, 1 hour or a few days after publishing? In order for the URLs to be updated in a way that can come with parameters. Or do you need to make settings from a different place to display the parameters in the urls, such as project settings?\n\nJoyce (Joyce) December 22, 2023, 4:02pm 2\n\nHi @mert.incesu\n\nCan you provide us with more information about how things are set up at page level, maybe record a short video?\n\nNot sure where you read this but there are pros and cons to both approaches (URL path and query) as detailed in this academy video about dynamic collection pages.\n\nNope, the change should be live as soon as the app finished publishing.\n\nI’m not sure how things are set up on your side. If the academy video above doesn’t help, could you record a video so I can better understand what you’re trying to do and hopefully point you in the right direction?\n\n1 Like\n\nbenFortunato (ben@mybuildplan.co) March 11, 2024, 3:09pm 3\n\nLooking through the video but I could find anything that talks about the difference between using url parameters in the format https://domain/page?param1=value1\u0026param2=value2 and using a unique page ID through a parameter.\n\nI’m trying to set up a filter that generates query parameters in the url so that way I can send a QR code that links to a specific search or users can share a specific view or search.\n\nSince parameter are internal to the app there is no need to pass parameters through a url string if the user is navigating through the site. Once the link is shared we would need to pull the parameters from the url string.\n\nFigured out how to add a “parameter” to the url through the page settings but that just adds text to create a unique page name.\n\nHow can I create a set of proper url parameters that I can use to set variables?\n\nliam_putra (William anputra) July 21, 2024, 4:10pm 4\n\nNo update on this from the weweb team ?\n\n1 Like\n\njptrinh (Jean-Philippe Trinh)\n\nJuly 24, 2024, 1:07pm 5\n\nHey @benFortunato @liam_putra\n\nSorry for the delay, hope this answer martches what you are trying to achieve!\n\nTo access page parameters, you have access to them in the Formula panel:\n\nHere is how to set Query parameters\n\nYou can set query variables from the Data/Variables panel.\nWhen you do that, from any link, you can set those variable by adding an item in the Queries section. Or set the value directly from the url.\nOn the page where you want to use the value of the query param, you can bind the variable as you would with any variable. Or use the query variable in Browser/query/yourParam.\n\n\n2 Likes\n\nbenFortunato (ben@mybuildplan.co) July 24, 2024, 2:02pm 6\n\nI would add to this a quick tip that triped me up. If you create a page that needs query parameters to load, it won’t show up in the weweb editor unless you manually put those parameter in the URL bar. For example if you have the parameter Myparameter1=value1 you will need to navigate to the weweb editor page and manually append those to the URL. For example: https://weweb-editor-url should be https://weweb-editor-url?myparameter1=value1\u0026myparameter2=value 2\n\nWhen you hit enter the page will reload and weweb will be able to load the parameter from the browser into the editor just like it will when you publish the site.\n\nHope that helps, that took me a while to figure out.\n\n2 Likes\n\nliam_putra (William anputra) July 26, 2024, 4:03am 7\n\nhmmm… how did you avoid that ?\n\nit might not take too much time to keep manually append those parameter on the URL, just wondering if theres more an automated solution ?\n\nliam_putra (William anputra) July 26, 2024, 4:03am 8\n\njptrinh (Jean-Philippe Trinh) July 26, 2024, 8:04am 9\n\nI modified my previous answer a bit. Got mixed up with Page parameters and Query parameters!\nIf you want multiple query parameters, you should use the second method\n\nAnd you can set a worfklow and change the value of your query variables to quickly set them!\n\nAntiokh (Anton) May 28, 2025, 3:48am 10\n\nThat’s fantastic! No answer to this question since Mar’2024.\n\nthe answer is: use ?wwParam-**param=test123**\n\n1 Like",
    "link": "https://community.weweb.io/t/query-parameters-in-url/6074",
    "snippet": "I'm trying to set up a filter that generates query parameters in the url so that way I can send a QR code that links to a specific search or users can share a ...",
    "title": "Query parameters in url - Ask us anything - WeWeb Community"
  }
]
Output (3151 bytes)
[
  {
    "text": "# Headline\nX (Twitter) now uses a credit-based, pay‑per‑usage API model: buy credits in the Developer Console, pay per endpoint/use, monitor usage in real time, and contact Enterprise for high-volume needs — note a 2M Post‑reads monthly cap and deduplication rules that can affect billing.\n\n## Most relevant sources\n1. **Pricing - X - X Developer Platform** — https://docs.x.com/x-api/getting-started/pricing\n   - Official pricing doc: describes a credit‑based, per‑endpoint pay‑per‑usage model (buy credits in the Developer Console), 24‑hour deduplication (same resource charged once per UTC day), auto‑recharge and spending limits, rewards of up to 20% xAI credits, and a monthly cap of 2 million Post reads (Enterprise for higher volume).\n\n2. **Programmatic access to X's posts, users, spaces, and more - X API** — https://docs.x.com/x-api/introduction\n   - Platform overview: confirms pay‑per‑usage pricing, recommends X API v2 for new projects, and summarizes capabilities (search, filtered stream, full‑archive search, fields/expansions and metrics) and quick‑start steps for Developer Console credentials.\n\n3. **Announcing the Launch of X API Pay‑Per‑Use Pricing** — https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476\n   - Official announcement/transition note: signals the rollout of pay‑per‑use pricing and indicates legacy free users may be transitioned (snippet references a one‑time $10 voucher for legacy free tier users being moved to pay‑per‑use).\n\n4. **About the X API - X Developer Platform** — https://docs.x.com/x-api/getting-started/about-x-api\n   - Feature and billing summary: reiterates pay‑per‑usage details plus API capabilities (structured objects, edit history, annotations, metrics), and explains deduplication, credit balance behavior, and monitoring via the Developer Console.\n\n5. **Twitter API Pricing 2026: Tiers, Costs \u0026 Alternatives - Xpoz** — https://www.xpoz.ai/blog/guides/understanding-twitter-api-pricing-tiers-and-alternatives/\n   - Independent analysis: breaks down the practical cost tiers (Free, Basic, Pro, Enterprise), highlights the steep gap between Basic ($100 / 10K reads) and Pro ($5,000 / 1M reads), and recommends third‑party alternatives (e.g., Xpoz) or hybrid approaches for mid‑volume read/use cases.\n\n## Overall takeaway \u0026 next steps\nOfficial X API billing is now credit‑based and usage‑driven with useful controls (auto‑recharge, spending limits, 24h deduplication) but also hard caps that can make mid‑range data retrieval expensive—many teams evaluate third‑party providers or hybrid setups to bridge the Basic→Pro gap. Next steps: check your Developer Console for exact per‑endpoint credit rates and current balances, enable spending limits/auto‑recharge if you’ll run production workloads, and if you expect \u003e10K but \u003c\u003c1M reads/month, research third‑party data providers (search “X API per‑endpoint pricing” and “Twitter API alternatives pricing comparison”) to compare costs and compliance implications."
  }
]
s5 slack success 2026-03-01 22:38:00 → 2026-03-01 22:38:00
Input (3151 bytes)
[
  {
    "text": "# Headline\nX (Twitter) now uses a credit-based, pay‑per‑usage API model: buy credits in the Developer Console, pay per endpoint/use, monitor usage in real time, and contact Enterprise for high-volume needs — note a 2M Post‑reads monthly cap and deduplication rules that can affect billing.\n\n## Most relevant sources\n1. **Pricing - X - X Developer Platform** — https://docs.x.com/x-api/getting-started/pricing\n   - Official pricing doc: describes a credit‑based, per‑endpoint pay‑per‑usage model (buy credits in the Developer Console), 24‑hour deduplication (same resource charged once per UTC day), auto‑recharge and spending limits, rewards of up to 20% xAI credits, and a monthly cap of 2 million Post reads (Enterprise for higher volume).\n\n2. **Programmatic access to X's posts, users, spaces, and more - X API** — https://docs.x.com/x-api/introduction\n   - Platform overview: confirms pay‑per‑usage pricing, recommends X API v2 for new projects, and summarizes capabilities (search, filtered stream, full‑archive search, fields/expansions and metrics) and quick‑start steps for Developer Console credentials.\n\n3. **Announcing the Launch of X API Pay‑Per‑Use Pricing** — https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476\n   - Official announcement/transition note: signals the rollout of pay‑per‑use pricing and indicates legacy free users may be transitioned (snippet references a one‑time $10 voucher for legacy free tier users being moved to pay‑per‑use).\n\n4. **About the X API - X Developer Platform** — https://docs.x.com/x-api/getting-started/about-x-api\n   - Feature and billing summary: reiterates pay‑per‑usage details plus API capabilities (structured objects, edit history, annotations, metrics), and explains deduplication, credit balance behavior, and monitoring via the Developer Console.\n\n5. **Twitter API Pricing 2026: Tiers, Costs \u0026 Alternatives - Xpoz** — https://www.xpoz.ai/blog/guides/understanding-twitter-api-pricing-tiers-and-alternatives/\n   - Independent analysis: breaks down the practical cost tiers (Free, Basic, Pro, Enterprise), highlights the steep gap between Basic ($100 / 10K reads) and Pro ($5,000 / 1M reads), and recommends third‑party alternatives (e.g., Xpoz) or hybrid approaches for mid‑volume read/use cases.\n\n## Overall takeaway \u0026 next steps\nOfficial X API billing is now credit‑based and usage‑driven with useful controls (auto‑recharge, spending limits, 24h deduplication) but also hard caps that can make mid‑range data retrieval expensive—many teams evaluate third‑party providers or hybrid setups to bridge the Basic→Pro gap. Next steps: check your Developer Console for exact per‑endpoint credit rates and current balances, enable spending limits/auto‑recharge if you’ll run production workloads, and if you expect \u003e10K but \u003c\u003c1M reads/month, research third‑party data providers (search “X API per‑endpoint pricing” and “Twitter API alternatives pricing comparison”) to compare costs and compliance implications."
  }
]
Output (3161 bytes)
[
  {
    "ok": true,
    "text": "# Headline\nX (Twitter) now uses a credit-based, pay‑per‑usage API model: buy credits in the Developer Console, pay per endpoint/use, monitor usage in real time, and contact Enterprise for high-volume needs — note a 2M Post‑reads monthly cap and deduplication rules that can affect billing.\n\n## Most relevant sources\n1. **Pricing - X - X Developer Platform** — https://docs.x.com/x-api/getting-started/pricing\n   - Official pricing doc: describes a credit‑based, per‑endpoint pay‑per‑usage model (buy credits in the Developer Console), 24‑hour deduplication (same resource charged once per UTC day), auto‑recharge and spending limits, rewards of up to 20% xAI credits, and a monthly cap of 2 million Post reads (Enterprise for higher volume).\n\n2. **Programmatic access to X's posts, users, spaces, and more - X API** — https://docs.x.com/x-api/introduction\n   - Platform overview: confirms pay‑per‑usage pricing, recommends X API v2 for new projects, and summarizes capabilities (search, filtered stream, full‑archive search, fields/expansions and metrics) and quick‑start steps for Developer Console credentials.\n\n3. **Announcing the Launch of X API Pay‑Per‑Use Pricing** — https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476\n   - Official announcement/transition note: signals the rollout of pay‑per‑use pricing and indicates legacy free users may be transitioned (snippet references a one‑time $10 voucher for legacy free tier users being moved to pay‑per‑use).\n\n4. **About the X API - X Developer Platform** — https://docs.x.com/x-api/getting-started/about-x-api\n   - Feature and billing summary: reiterates pay‑per‑usage details plus API capabilities (structured objects, edit history, annotations, metrics), and explains deduplication, credit balance behavior, and monitoring via the Developer Console.\n\n5. **Twitter API Pricing 2026: Tiers, Costs \u0026 Alternatives - Xpoz** — https://www.xpoz.ai/blog/guides/understanding-twitter-api-pricing-tiers-and-alternatives/\n   - Independent analysis: breaks down the practical cost tiers (Free, Basic, Pro, Enterprise), highlights the steep gap between Basic ($100 / 10K reads) and Pro ($5,000 / 1M reads), and recommends third‑party alternatives (e.g., Xpoz) or hybrid approaches for mid‑volume read/use cases.\n\n## Overall takeaway \u0026 next steps\nOfficial X API billing is now credit‑based and usage‑driven with useful controls (auto‑recharge, spending limits, 24h deduplication) but also hard caps that can make mid‑range data retrieval expensive—many teams evaluate third‑party providers or hybrid setups to bridge the Basic→Pro gap. Next steps: check your Developer Console for exact per‑endpoint credit rates and current balances, enable spending limits/auto‑recharge if you’ll run production workloads, and if you expect \u003e10K but \u003c\u003c1M reads/month, research third‑party data providers (search “X API per‑endpoint pricing” and “Twitter API alternatives pricing comparison”) to compare costs and compliance implications."
  }
]