← research-and-report

Run Detail

875f13e6-d081-4f7c-84be-044e6a3354a8
success

Started

2026-03-01 22:28:24

Finished

2026-03-01 22:28:56

Steps

s3 fetch_content success 2026-03-01 22:28:24 → 2026-03-01 22:28:35
Input (3800 bytes)
[
  {
    "link": "https://github.com/huginn/huginn",
    "snippet": "Huginn is a system for building agents that perform automated tasks for you online. They can read the web, watch for events, and take actions on your behalf.",
    "title": "huginn/huginn: Create agents that monitor and act on your ..."
  },
  {
    "link": "https://elest.io/open-source/huginn/resources/quickstart",
    "snippet": "Agents can perform a wide range of tasks, such as fetching data from APIs, monitoring websites, sending notifications, and more. Each agent is configured with ...",
    "title": "Huginn - Quickstart | Elest.io"
  },
  {
    "link": "https://marks.kitchen/blog/huginn/",
    "snippet": "I've been playing around a lot with Huginn, which is a service that allows you to run “agents” for automation. It is similar to IFTTT.",
    "title": "An Introduction to Huginn - Mark's Kitchen"
  },
  {
    "link": "https://dev.to/heroku/huginn-an-open-source-self-hosted-ifttt-5hd6",
    "snippet": "Each Agent performs a specific function, such as sending an email or requesting a website. Agents generate and consume JSON payloads called ...",
    "title": "Huginn: An Open-Source, Self-Hosted IFTTT"
  },
  {
    "link": "https://medium.com/@VirtualAdept/huginn-writing-a-simple-agent-network-97c63c492334",
    "snippet": "This agent network will run every half hour, poll a REST API endpoint, and e-mail you what it gets. You'll have to have an already running Huginn instance.",
    "title": "Huginn: Writing a simple agent network"
  },
  {
    "link": "https://github.com/huginn",
    "snippet": "Huginn. Create agents that monitor and act on your behalf. Your agents are standing by!",
    "title": "Huginn - Create agents that monitor and act on your behalf"
  },
  {
    "link": "https://productivity.directory/huginn",
    "snippet": "Huginn is a system for building agents that perform automated tasks for you online. They can read the web, watch for events, and take actions on your behalf.",
    "title": "Huginn Review 2025 - Features, Pricing, Hacks and Tips"
  },
  {
    "link": "https://www.reddit.com/r/selfhosted/comments/fmky18/huginn_agent_mageathread/",
    "snippet": "It allows you to create \"agents\" which are like little bots that do tasks for you. Each agent is sort of like a \"function\" in programming.",
    "title": "Huginn Agent Mageathread! : r/selfhosted - Reddit"
  },
  {
    "link": "https://haystack.deepset.ai/blog/query-decomposition",
    "snippet": "This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach.",
    "title": "Advanced RAG: Query Decomposition \u0026 Reasoning - Haystack"
  },
  {
    "link": "https://www.jetbrains.com/help/youtrack/cloud/search-and-command-attributes.html",
    "snippet": "This page provides a list of attributes and keywords that are used in YouTrack query language. You'll also find a complete list of operators, symbols, and ...",
    "title": "Search Query Reference | YouTrack Cloud Documentation - JetBrains"
  },
  {
    "link": "https://dev.to/j12y/query-github-repo-topics-using-graphql-35ha",
    "snippet": "Creating a customized user profile page for GitHub to showcase work projects and make navigation to relevant topics easier.",
    "title": "Query GitHub Repo Topics Using GraphQL - DEV Community"
  },
  {
    "link": "https://www.sprinklr.com/help/articles/faqs-and-advanced-usecases/create-an-advanced-topic-query/646331628ea3c9635cf36711",
    "snippet": "Advanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more. By ...",
    "title": "‎Create an Advanced Topic Query | Sprinklr Help Center"
  },
  {
    "link": "https://learn.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language",
    "snippet": "The query language for the Azure Resource Graph supports many operators and functions. Each work and operate based on Kusto Query Language (KQL).",
    "title": "Understanding the Azure Resource Graph query language - Microsoft"
  }
]
Output (117397 bytes)
[
  {
    "content_readable": "What is Huginn?\n\nHuginn is a system for building agents that perform automated tasks for you online. They can read the web, watch for events, and take actions on your behalf. Huginn's Agents create and consume events, propagating them along a directed graph. Think of it as a hackable version of IFTTT or Zapier on your own server. You always know who has your data. You do.\n\nHere are some of the things that you can do with Huginn:\n\nTrack the weather and get an email when it's going to rain (or snow) tomorrow (\"Don't forget your umbrella!\")\nList terms that you care about and receive email when their occurrence on Twitter changes. (For example, want to know when something interesting has happened in the world of Machine Learning? Huginn will watch the term \"machine learning\" on Twitter and tell you when there is a spike in discussion.)\nWatch for air travel or shopping deals\nFollow your project names on Twitter and get updates when people mention them\nScrape websites and receive email when they change\nConnect to Adioso, HipChat, FTP, IMAP, Jabber, JIRA, MQTT, nextbus, Pushbullet, Pushover, RSS, Bash, Slack, StubHub, translation APIs, Twilio, Twitter, and Weibo, to name a few.\nSend digest email with things that you care about at specific times during the day\nTrack counts of high frequency events and send an SMS within moments when they spike, such as the term \"san francisco emergency\"\nSend and receive WebHooks\nRun custom JavaScript or CoffeeScript functions\nTrack your location over time\nCreate Amazon Mechanical Turk workflows as the inputs, or outputs, of agents (the Amazon Turk Agent is called the \"HumanTaskAgent\"). For example: \"Once a day, ask 5 people for a funny cat photo; send the results to 5 more people to be rated; send the top-rated photo to 5 people for a funny caption; send to 5 final people to rate for funniest caption; finally, post the best captioned photo on my blog.\"\n\nJoin us in our Gitter room to discuss the project.\n\nJoin us!\n\nWant to help with Huginn? All contributions are encouraged! You could make UI improvements, add new Agents, write documentation and tutorials, or try tackling issues tagged with #\"help wanted\". Please fork, add specs, and send pull requests!\n\nHave an awesome idea but not feeling quite up to contributing yet? Head over to our Official 'suggest an agent' thread and tell us!\n\nExamples\n\nPlease checkout the Huginn Introductory Screencast!\n\nAnd now, some example screenshots. Below them are instructions to get you started.\n\nGetting Started\n\nDocker\n\nThe quickest and easiest way to check out Huginn is to use the official Docker image. Have a look at the documentation.\n\nLocal Installation\n\nIf you just want to play around, you can simply fork this repository, then perform the following steps:\n\nRun git remote add upstream https://github.com/huginn/huginn.git to add the main repository as a remote for your fork.\nCopy .env.example to .env (cp .env.example .env) and edit .env, at least updating the APP_SECRET_TOKEN variable.\nMake sure that you have MySQL or PostgreSQL installed. (On a Mac, the easiest way is with Homebrew. If you're going to use PostgreSQL, you'll need to prepend all commands below with DATABASE_ADAPTER=postgresql.)\nRun bundle to install dependencies\nRun bundle exec rake db:create, bundle exec rake db:migrate, and then bundle exec rake db:seed to create a development database with some example Agents.\nRun bundle exec foreman start, visit http://localhost:3000/, and login with the username of admin and the password of password.\nSetup some Agents!\nRead the wiki for usage examples and to get started making new Agents.\nPeriodically run git fetch upstream and then git checkout master \u0026\u0026 git merge upstream/master to merge in the newest version of Huginn.\n\nNote: By default, email messages are intercepted in the development Rails environment, which is what you just setup. You can view them at http://localhost:3000/letter_opener. If you'd like to send real email via SMTP when playing with Huginn locally, set SEND_EMAIL_IN_DEVELOPMENT to true in your .env file.\n\nIf you need more detailed instructions, see the Novice setup guide.\n\nDevelop\n\nAll agents have specs! And there's also acceptance tests that simulate running Huginn in a headless browser.\n\nInstall PhantomJS 2.1.1 or greater:\n\nUsing Node Package Manager: npm install phantomjs\nUsing Homebrew on OSX brew install phantomjs\nRun all specs with bundle exec rspec\nRun a specific spec with bundle exec rspec path/to/specific/test_spec.rb.\nRead more about rspec for rails here.\n\nUsing Huginn Agent gems\n\nHuginn Agents can now be written as external gems and be added to your Huginn installation with the ADDITIONAL_GEMS environment variable. See the Additional Agent gems section of .env.example for more information.\n\nIf you'd like to write your own Huginn Agent Gem, please see huginn_agent.\n\nOur general intention is to encourage complex and specific Agents to be written as Gems, while continuing to add new general-purpose Agents to the core Huginn repository.\n\nDeployment\n\nPlease see the Huginn Wiki for detailed deployment strategies for different providers.\n\nHeroku\n\nTry Huginn on Heroku: (Takes a few minutes to setup. Read the documentation while you are waiting and be sure to click 'View it' after launch!) Huginn launches only on a paid subscription plan for Heroku. For non-experimental use, we strongly recommend Heroku's 1GB paid plan or our Docker container.\n\nOpenShift\n\nOpenShift Online\n\nTry Huginn on OpenShift Online\n\nCreate a new app with either mysql or postgres:\n\noc new-app -f https://raw.githubusercontent.com/huginn/huginn/master/openshift/templates/huginn-mysql.json\n\nor\n\noc new-app -f https://raw.githubusercontent.com/huginn/huginn/master/openshift/templates/huginn-postgresql.json\n\nNote: You can also use the web console to import either json file by going to \"Add to Project\" -\u003e \"Import YAML/JSON\".\n\nIf you are on the Starter plan, make sure to follow the guide to remove any existing application.\n\nThe templates should work on a v3 installation or the current v4 online.\n\nManual installation on any server\n\nHave a look at the installation guide.\n\nOptional Setup\n\nSetup for private development\n\nSee private development instructions on the wiki.\n\nEnable the WeatherAgent\n\nIn order to use the WeatherAgent you need an Weather Data API key from Pirate Weather. Sign up for one and then change the value of api_key: your-key in your seeded WeatherAgent.\n\nDisable SSL\n\nWe assume your deployment will run over SSL. This is a very good idea! However, if you wish to turn this off, you'll probably need to edit config/initializers/devise.rb and modify the line containing config.rememberable_options = { :secure =\u003e true }. You will also need to edit config/environments/production.rb and modify the value of config.force_ssl.\n\nLicense\n\nHuginn is provided under the MIT License.\n\nHuginn was originally created by @cantino in 2013. Since then, many people's dedicated contributions have made it what it is today.\n\n",
    "link": "https://github.com/huginn/huginn",
    "snippet": "Huginn is a system for building agents that perform automated tasks for you online. They can read the web, watch for events, and take actions on your behalf.",
    "title": "huginn/huginn: Create agents that monitor and act on your ..."
  },
  {
    "content_readable": "Huginn is an open source web automation tool. It enables users to create agents, which function like programs, for tasks such as website monitoring, data retrieval, and online service interaction. These agents can be configured to respond to web changes or trigger specific actions, providing a solution for automating tasks and staying informed about online activities\n\nLogin\n\nOn your first visit to the site, you will be presented with the login/signup screen.\n\nWhen your instance is first created, an account is created for you with the email you chose. You can get the password for this account by going to your Elestio dashboard and clicking on the \"Show Password\" button.\n\nEnter your email, name and password and click the \"Login\" button\n\nCreating New Agent\n\nAgent is a fundamental building block that performs a specific task or action. It can be thought of as a software component that carries out automated actions based on predefined rules and triggers. Agents can perform a wide range of tasks, such as fetching data from APIs, monitoring websites, sending notifications, and more. Each agent is configured with its own set of options and parameters to define its behavior. You can create agent by clicking on the \"New Agent\" button.\n\nCreating New Scenario\n\nScenario is a sequence of events and agents that work together to perform a specific task or automate a workflow. It represents a set of actions and conditions that are executed in a predefined order. Scenarios in Huginn allow you to define complex workflows by connecting agents and specifying the flow of data between them. Each scenario can have multiple agents and events, and they can be triggered by various conditions or time intervals. You can create new scenario by clicking on the \"New Scenario\" button.\n\nCreating New Credential\n\nCredentials are used to securely store and manage sensitive information, such as API keys, passwords, and access tokens. Credentials can be created and associated with agents to provide them with the necessary authentication details to interact with external services or APIs. This allows agents to securely access and retrieve data from various sources without exposing sensitive information in the agent configuration. You can create new credentials by clicking on the \"New Credential\" button.\n\nEvents\n\nEvents are a key concept used to trigger actions and automate workflows. They represent specific occurrences or conditions that can initiate the execution of agents within a scenario. Events can be based on various triggers, such as receiving an HTTP request, a specific time interval, changes in data, or external API calls. When an event is triggered, it can pass data to the connected agents, allowing them to perform actions based on the event's context. You can check events in the \"Events\" section.\n\nBackground Jobs\n\nJobs are background tasks that are executed asynchronously. They can be used to perform long-running or resource-intensive operations without blocking the main execution flow. Jobs can be scheduled to run at specific intervals or triggered by events. They are commonly used for tasks such as data processing, API calls, and sending notifications. The status and details of jobs can be monitored in the \"Jobs\" section of the Huginn user interface.\n\nCreating New User\n\nUser are individuals who has registered an account and has access to the Huginn user interface. Users can log in to Huginn, create and manage agents, scenarios, credentials, events, and jobs. They can configure and customize their Huginn instance according to their specific needs. Users can also monitor the status and details of their agents, scenarios, and jobs through the user interface. You can create new user by clicking on the \"New User\" button.\n\n",
    "link": "https://elest.io/open-source/huginn/resources/quickstart",
    "snippet": "Agents can perform a wide range of tasks, such as fetching data from APIs, monitoring websites, sending notifications, and more. Each agent is configured with ...",
    "title": "Huginn - Quickstart | Elest.io"
  },
  {
    "content_readable": "I’ve been playing around a lot with Huginn, which is a service that allows you to run “agents” for automation. It is similar to IFTTT.\n\nA lot of people see Huginn, and think it’s cool, but don’t know what to do with it. I didn’t really either when I first heard of the project a few years ago. Hopefully you can get some ideas from this blog.\n\nI have previously written about my “On This Day” software, which was a webserver I handcrafted. It ran a bunch of scripts to pull in data from various sources, and would create a daily digest of the items. Since I put this together fairly quickly several years ago, it runs usually without issue. But it’s annoying to extend, and annoying to debug. I wanted to try to migrate the functionaility to huginn.\n\nHuginn has a bit of a learning curve. At first, I was having a hard time understanding the agent configuration options, and I couldn’t even figure out how to use the HTTP request agent (called the “Website Agent”). With some light reading of the source code, I figured everything out. Currently, my “On This Day” scenario features 14 agents.\n\n7 “source” agents. Most of these are the “Website agents” which scrape some data from the web, and parse the information I want from it. Several of these are for comics, and they simply output the image source for a comic image. One of the agents is a JavaScript agent, which runs a script I wrote that generates a different result based on the date.\n\n5 formatting agents, which take in random JSON input, and normalize it. Each source has a different input, but the outputs all just have a “message” and a “type”.\n\n1 digest agent, which takes in the normalized JSON, and combines it into one HTML template. This listens for events over the course of a day, and then when scheduled, it outputs the result of the templating.\n\n1 “data output” agent, which basically just means an RSS feed output. For every new input, this adds an item to the feed.\n\nSo far, this solution works really well. It’s really easy to add new things. A few of the odd data sources from “On This Day” use JavaScript to fill in functionality gaps. For my journal data entry, which gets entries from historic journals, I ended up making my own API service. I finished most of this API in an evening, and adding it into huginn was really simple.\n\nFor fun, I also added a “reblog” feed. I connected an Webhook agent to my RSS reader, Miniflux. I can “save” stories in Miniflux, and their links will get added to a daily list of “reblogged” items. I used a manual agent as well so that I can add links that don’t originate from Miniflux (i.e. if I just find something elsewhere on the web). You can follow this feed here.\n\nI also have been using huginn for my personal tracking stuff. I previously had a bunch of cronjobs running on a raspberry pi. It collected data from my Airgradient arduino, and used a weather API. Some of this I’ve moved fully to Huginn, which will massively simplify all of the configuration.\n\nOne benefit of having all of this is huginn is that it is much easier to set it all up again on a new server. As much as possible, everything is together in one place, rather than spread around in cronjobs on various nodes. I’ve been putting everything in Docker and exposing it via Traefik. It’s very simple to set this all up.\n\nOne downside of Huginn is that it’s been hogging a lot of memory on my VPS. This isn’t a huge deal, as I was already using a very minimal server. Additionally, as I mentioned before, it does take some time to figure out everything. I’m still not 100% sure of all of the settings, but functionally I was able to get a lot out with minimal tinkering. I also was dissapointed in the Docker setup, which doesn’t have as much documentation for as I expected.\n\nOverall, I really enjoy Huginn! It’s taken a lot of the scripts I’ve written over the last few years, and simplies their deployment and configuration. It’s so much easier to update them, and I can do things I never before attempted.",
    "link": "https://marks.kitchen/blog/huginn/",
    "snippet": "I've been playing around a lot with Huginn, which is a service that allows you to run “agents” for automation. It is similar to IFTTT.",
    "title": "An Introduction to Huginn - Mark's Kitchen"
  },
  {
    "content_readable": "As developers, we don’t have the time or patience for routine tasks. We like to get things done, and any tools that can help us automate are high on our radar.\n\nEnter Huginn, a workflow automation server similar to Zapier or IFTTT, but open source. With Huginn you can automate tasks such as watching for air travel deals, continually watching for certain topics on Twitter, or scanning for sensitive data in your code.\n\nRecently a post about Huginn hit the top of Hacker News. This piqued my interest, so I wanted to see why it's so popular, what it's all about, and what it's being used for.\n\nHow Huginn Started\n\nI reached out to Huginn's creator, Andrew Cantino, to ask him why he started it.\n\n\"I started the project in 2013 to scratch my own itch—I wanted to scrape some websites to know when they changed (web comics, movie trailers, local weather forecasts, Craigslist sales, eBay, etc.) and I wanted to be able to automate simple reactions to those changes. I'd been interested in personal automation for a while and Huginn was initially a quick project I built over the Christmas holidays that year.\"\n\nHowever, that simple Christmas-holiday project quickly grew.\n\nToday, Huginn is a community-driven project with hundreds of contributors and thousands of users. Andrew still uses Huginn for its original use case:\n\n\"I still primarily use Huginn for this purpose: it tells me about upcoming yard sales, if I should bring an umbrella today because of rain in the forecast, when rarely-updated blogs have changed, when certain words spike on Twitter, etc. I also have found it very useful for sourcing information for the weekly newsletter that I write about the space industry, called The Orbital Index.\"\n\nHowever, the community has found a wider range of uses. So let's look at exactly what Huginn is, how to set it up, and how to use it to automate your everyday life.\n\nHow Huginn Works\n\nHuginn is a web-based scheduling service that runs workers called Agents. Each Agent performs a specific function, such as sending an email or requesting a website. Agents generate and consume JSON payloads called events, which can be used to chain Agents together. Agents can be scheduled, or executed manually.\n\nGetting Started\n\nIt's easy to deploy Huginn with just one click using the Deploy to Heroku button. Huginn also supports Docker and Docker Compose, manual installation on Linux, and many other deployment methods. After installing, you can extend Huginn by using one of the many available Agent Gems, or by creating your own.\n\nOnce you've deployed Huginn and have logged in (check your specific setup for the URL), creating a new Agent is simple, as seen in this screen shot. This Agent follows a Twitter stream in real time.\n\nHere's an existing Agent that pulls the latest comic from xkcd.com. You can see the basic stats of the Agent (last checked, last created, and so on). The Options field shows how the Agent is configured, including the CSS selectors used to extract data from the page.\n\nScenarios\n\nYou can also organize Agents into Scenarios, which allows you to group similar Agents as well as import and export Agent configurations as JSON files. You can also fine-tune Agent scheduling and configuration using special Agents called Controllers. Here we see a Scenario build around the theme of \"Entertainment.\"\n\nDynamic Content\n\nLastly, Huginn uses the Liquid templating engine, which allows you to load dynamic content into Agents. This is commonly used to store configuration data (such as credentials) separately from Agents.\n\nHere, it's used to format the URL, title, and on-hover text from the XKCD Source Agent as HTML:\n\nWhy Would I Use Huginn?\n\nIn addition to web scraping, Huginn supports a wide variety of actions that can allow for some truly complex workflows. Disclaimer: Many sites disallow automated web scraping. Be sure to check the terms of service (TOS) of any website you intend to access using Huginn.\n\nSome of the examples from the GitHub page include:\n\nWatch for air travel or shopping deals\nFollow your project names on Twitter and get updates when people mention them Connect to Adioso, HipChat, Basecamp, Growl, FTP, IMAP, Jabber, JIRA, MQTT, nextbus, Pushbullet, Pushover, RSS, Bash, Slack, StubHub, translation APIs, Twilio, Twitter, Wunderground, and Weibo, to name a few.\nSend digest emails with things that you care about at specific times during the day\nTrack counts of high frequency events and send an SMS within moments when they spike\nSend and receive WebHooks\nRun custom JavaScript or CoffeeScript functions\nTrack your location over time\nCreate Amazon Mechanical Turk workflows as the inputs, or outputs, of agents (the Amazon Turk Agent is called the \"HumanTaskAgent\"). For example: \"Once a day, ask 5 people for a funny cat photo; send the results to 5 more people to be rated; send the top-rated photo to 5 people for a funny caption; send to 5 final people to rate for funniest caption; finally, post the best captioned photo on my blog.\"\n\nLet's look at a few of these use cases in detail.\n\nCurated Feeds\n\nUsing the Website Agent, you can fetch the latest contents of multiple web pages, filter and aggregate the results, then send the final contents to yourself as an email. The default Scenario demonstrates this by fetching the latest XKCD comic. This creates an event containing the comic title, URL, and on-hover text, which are rendered as HTML via an Event Formatting Agent. Another Website Agent simultaneously gets the latest movie trailers from iTunes, then both events are merged into an Email Digest Agent that fires each afternoon:\n\nMonitoring Social Networks\n\nHuginn supports several social networks including Twitter and Tumblr. These Agents can watch for new posts, trending topics, and updates from other users.\n\nLet’s say you live in a hurricane-prone area and want to follow the impact of a storm. Using a Twitter Stream Agent, you can watch for Tweets containing “hurricane,” “storm,” and so on, and pass the results to a Peak Detector Agent. This counts Tweets over a period of time, measures the standard deviation, and fires an event if it detects an outlier. You can have this event trigger an Email Agent that notifies you immediately. Andrew Cantino explains this use case in more detail on his blog.\n\nPrice Shopping\n\nHuginn makes an excellent online shopping tool. When shopping for the best deal, create Website Agents to run daily searches on discount and trading sites. Use Event Formatting Agents to extract prices, then use a Change Detector Agent to compare the last retrieved price to the current price. If it’s lower, you can extract the item URL and send it straight to your inbox.\n\nSecurity Alerts\n\nStaying on top of security updates is a continuous process. You can use Huginn to watch the National Vulnerability Database for CVEs affecting your systems and notify you immediately. If you want to filter the results (e.g. only show high-priority alerts), you can use a Trigger Agent to only allow results where the severity is above a certain value.\n\nAdvanced Use Cases\n\nHuginn comes with some powerful Agents that greatly extend its capabilities beyond web scraping.\n\nData Processing and Validation\n\nHuginn can read files stored on the host, making it a useful data processing tool. Let's say you're testing changes to a codebase, and before you commit, you want to scan for any sensitive data that you might have left in during testing. You can create a Local File Agent to scan your project directory, pass the contents to an Event Formatting Agent, and use regular expressions to detect credentials, passwords, and similar strings. Alternatively, you could use a Shell Command Agent to call a utility like repo-supervisor and fire a desktop notification when it detects matches.\n\nNewsroom Automation\n\nOne of Huginn’s first great successes was its adoption by the New York Times to automate newsroom tasks. During the 2014 Winter Olympics, Huginn monitored their data pipeline availability and sent notifications when medals were awarded. Huginn also notified reporters when new stories published and updated a Slack channel when content changed on nytimes.com. You can learn more about their use cases at Huginn for Newsrooms.\n\nConclusion\n\nHuginn is a deceptively simple tool with a lot of flexibility. The best way to see what it can do is to try it yourself. To learn more, visit https://github.com/huginn/huginn.",
    "link": "https://dev.to/heroku/huginn-an-open-source-self-hosted-ifttt-5hd6",
    "snippet": "Each Agent performs a specific function, such as sending an email or requesting a website. Agents generate and consume JSON payloads called ...",
    "title": "Huginn: An Open-Source, Self-Hosted IFTTT"
  },
  {
    "content_readable": "",
    "link": "https://medium.com/@VirtualAdept/huginn-writing-a-simple-agent-network-97c63c492334",
    "snippet": "This agent network will run every half hour, poll a REST API endpoint, and e-mail you what it gets. You'll have to have an already running Huginn instance.",
    "title": "Huginn: Writing a simple agent network"
  },
  {
    "content_readable": "Skip to content\n\nNavigation Menu\n\n{{ message }}\n\nOverview\nRepositories\nProjects\nPackages\nPeople\n\nPopular repositories Loading\n\nCreate agents that monitor and act on your behalf. Your agents are standing by!\n\nRuby 48.8k 4.2k\n\nBase for creating new Huginn Agents as Gems\n\nRuby 128 50\n\nTests for the Huginn docker images\n\nRuby 5 10\n\nRepositories\nType\nSelect type\n\nAll Public Sources Forks Archived Mirrors Templates\nLanguage\nSelect language\n\nAll Ruby\nSort\nSelect order\n\nLast updated Name Stars\n\nShowing 6 of 6 repositories\n\nhuginn Public\n\nCreate agents that monitor and act on your behalf. Your agents are standing by!\n\nhuginn/huginn’s past year of commit activity\n\nhuginn/omniauth-dropbox-oauth2’s past year of commit activity\n\nRuby 4 45 0 1\n\nUpdated Nov 17, 2024\n\nhuginn_agent Public\n\nBase for creating new Huginn Agents as Gems\n\nhuginn/huginn_agent’s past year of commit activity\n\nRuby 128\n\nMIT 50 3 2\n\nUpdated Oct 28, 2024\n\nhuginn/huginn_docker_specs’s past year of commit activity\n\nRuby 5 10 0 1\n\nUpdated Apr 12, 2023\n\nhuginn/delayed_job_active_record’s past year of commit activity\n\nRuby 1\n\nMIT 344 0 0\n\nUpdated Jan 15, 2023\n\nhuginn/tumblr_client’s past year of commit activity\n\nRuby 2\n\nApache-2.0 137 0 0\n\nUpdated Jul 21, 2020\n",
    "link": "https://github.com/huginn",
    "snippet": "Huginn. Create agents that monitor and act on your behalf. Your agents are standing by!",
    "title": "Huginn - Create agents that monitor and act on your behalf"
  },
  {
    "content_readable": "Huginn\n\nCreate agents that monitor and act on your behalf. Your agents are standing by!\n\nVisit Website Reviews\n\nWhat is Huginn?\n\nHuginn is a system for building agents that perform automated tasks for you online. They can read the web, watch for events, and take actions on your behalf. Huginn's Agents create and consume events, propagating them along a directed graph. Think of it as a hackable version of IFTTT or Zapier on your own server. You always know who has your data. You do.\n\nHere are some of the things that you can do with Huginn:\n\nTrack the weather and get an email when it's going to rain (or snow) tomorrow (\"Don't forget your umbrella!\")\nList terms that you care about and receive email when their occurrence on Twitter changes. (For example, want to know when something interesting has happened in the world of Machine Learning? Huginn will watch the term \"machine learning\" on Twitter and tell you when there is a spike in discussion.)\nWatch for air travel or shopping deals\nFollow your project names on Twitter and get updates when people mention them\nScrape websites and receive email when they change\nConnect to Adioso, HipChat, FTP, IMAP, Jabber, JIRA, MQTT, nextbus, Pushbullet, Pushover, RSS, Bash, Slack, StubHub, translation APIs, Twilio, Twitter, and Weibo, to name a few.\nSend digest email with things that you care about at specific times during the day\nTrack counts of high frequency events and send an SMS within moments when they spike, such as the term \"san francisco emergency\"\nSend and receive WebHooks\nRun custom JavaScript or CoffeeScript functions\nTrack your location over time\nCreate Amazon Mechanical Turk workflows as the inputs, or outputs, of agents (the Amazon Turk Agent is called the \"HumanTaskAgent\"). For example: \"Once a day, ask 5 people for a funny cat photo; send the results to 5 more people to be rated; send the top-rated photo to 5 people for a funny caption; send to 5 final people to rate for funniest caption; finally, post the best captioned photo on my blog.\"\n\nHuginn Reviews\n\nHuginn doesn't have enough reviews yet!\n\nHuginn details\n\nFree\n\nCategories",
    "link": "https://productivity.directory/huginn",
    "snippet": "Huginn is a system for building agents that perform automated tasks for you online. They can read the web, watch for events, and take actions on your behalf.",
    "title": "Huginn Review 2025 - Features, Pricing, Hacks and Tips"
  },
  {
    "content_readable": "whoa there, pardner!\n\nReddit's awesome and all, but you may have a bit of a problem. We've seen far too many requests come from your IP address recently.\n\nPlease wait a few minutes and try again.\n\nIf you're still getting this error after a few minutes and think that we've incorrectly blocked you or you would like to discuss easier ways to get the data you want, please contact us at this email address.\n\nYou can read Reddit's Terms of Service here.\n\nWhen contacting us, please include your Reddit account along with the following code:\n\n019cab4d-76d0-7f1a-81cd-1b1d8c350f87",
    "link": "https://www.reddit.com/r/selfhosted/comments/fmky18/huginn_agent_mageathread/",
    "snippet": "It allows you to create \"agents\" which are like little bots that do tasks for you. Each agent is sort of like a \"function\" in programming.",
    "title": "Huginn Agent Mageathread! : r/selfhosted - Reddit"
  },
  {
    "content_readable": "This is part one of the Advanced Use Cases series:\n\n1️⃣ Extract Metadata from Queries to Improve Retrieval\n\n2️⃣ Query Expansion\n\n3️⃣ Query Decomposition\n\n4️⃣ Automated Metadata Enrichment\n\nSometimes a single question is multiple questions in disguise. For example: “Did Microsoft or Google make more money last year?”. To get to the correct answer for this seemingly simple question, we actually have to break it down: “How much money did Google make last year?” and “How much money did Microsoft make last year?”. Only if we know the answer to these 2 questions can we reason about the final answer.\n\nThis is where query decomposition comes in. This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach:\n\nDecompose the original question into smaller questions that can be answered independently to each other. Let’s call these ‘sub questions’ here on out.\nReason about the final answer to the original question, based on each sub-answer.\n\nWhile for many query/dataset combinations, this may not be required, for some, it very well may be. At the end of the day, often one query results in one retrieval step. If within that one single retrieval step we are unable to have the retriever return both the money Microsoft made last year and Google, then the system will struggle to produce an accurate final response.\n\nThis method ensures that we are:\n\nretrieving the relevant context for each sub question.\nreasoning about the final answer given each answer based on the contexts retrieved for each sub question.\n\nIn this article, I’ll be going through some key steps that allow you to achieve this. You can find the full working example and code in the linked recipe from our cookbook. Here, I’ll only show the most relevant parts of the code.\n\n🚀 I’m sneaking something extra into this article. I saw the opportunity to try out the structured output functionality (currently in beta) by OpenAI to create this example. For this step, I extended the OpenAIGenerator in Haystack to be able to work with Pydantic schemas. More on this in the next step.\n\nLet’s try build a full pipeline that makes use of query decomposition and reasoning. We’ll use a dataset about Game of Thrones (a classic for Haystack) which you can find preprocessed and chunked on Tuana/game-of-thrones on Hugging Face Datasets.\n\nDefining our Questions Structure\n\nOur first step is to create a structure within which we can contain the subquestions, and each of their answers. This will be used by our OpenAIGenerator to produce a structured output.\n\nfrom pydantic import BaseModel\n\nclass Question(BaseModel):\n    question: str\n    answer: Optional[str] = None\n\nclass Questions(BaseModel):\n    questions: list[Question]\n\n\nThe structure is simple, we have Questions made up of a list of Question. Each Question has the question string as well as an optional answer to that question.\n\nDefining the Prompt for Query Decomposition\n\nNext up, we need to get an LLM to decompose a question and produce multiple questions. Here, we will start making use of our Questions schema.\n\nsplitter_prompt = \"\"\"\nYou are a helpful assistant that prepares queries that will be sent to a search component.\nSometimes, these queries are very complex.\nYour job is to simplify complex queries into multiple queries that can be answered\nin isolation to eachother.\n\nIf the query is simple, then keep it as it is.\nExamples\n1. Query: Did Microsoft or Google make more money last year?\n   Decomposed Questions: [Question(question='How much profit did Microsoft make last year?', answer=None), Question(question='How much profit did Google make last year?', answer=None)]\n2. Query: What is the capital of France?\n   Decomposed Questions: [Question(question='What is the capital of France?', answer=None)]\n3. Query: {{question}}\n   Decomposed Questions:\n\"\"\"\n\nbuilder = PromptBuilder(splitter_prompt)\nllm = OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions})\n\n\nAnswering Each Sub Question\n\nFirst, let’s build a pipeline that uses the splitter_prompt to decompose our question:\n\nquery_decomposition_pipeline = Pipeline()\n\nquery_decomposition_pipeline.add_component(\"prompt\", PromptBuilder(splitter_prompt))\nquery_decomposition_pipeline.add_component(\"llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\n\nquery_decomposition_pipeline.connect(\"prompt\", \"llm\")\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question}})\nprint(result[\"llm\"][\"structured_reply\"])\n\n\nThis produces the following Questions (List[Question])\n\nquestions=[Question(question='How many siblings does Jamie have?', answer=None), \n           Question(question='How many siblings does Sansa have?', answer=None)]\n\n\nNow, we have to fill in the answer fields. For this step, we need to have a separate prompt and two custom components:\n\nThe CohereMultiTextEmbedder which can take multiple questions rather than a single one like the CohereTextEmbedder.\nThe MultiQueryInMemoryEmbeddingRetriever which can again, take multiple questions and their embeddings, returning question_context_pairs. Each pair contains the question and documents that are relevant to that question.\n\nNext, we need to construct a prompt that can instruct a model to answer each subquestion:\n\nmulti_query_template = \"\"\"\nYou are a helpful assistant that can answer complex queries.\nHere is the original question you were asked: {{question}}\n\nAnd you have split the task into the following questions:\n{% for pair in question_context_pairs %}\n  {{pair.question}}\n{% endfor %}\n\nHere are the question and context pairs for each question.\nFor each question, generate the question answer pair as a structured output\n{% for pair in question_context_pairs %}\n  Question: {{pair.question}}\n  Context: {{pair.documents}}\n{% endfor %}\nAnswers:\n\"\"\"\n\nmulti_query_prompt = PromptBuilder(multi_query_template)\n\n\nLet’s build a pipeline that can answer each individual sub question. We will call this the query_decomposition_pipeline :\n\nquery_decomposition_pipeline = Pipeline()\n\nquery_decomposition_pipeline.add_component(\"prompt\", PromptBuilder(splitter_prompt))\nquery_decomposition_pipeline.add_component(\"llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\nquery_decomposition_pipeline.add_component(\"embedder\", CohereMultiTextEmbedder(model=\"embed-multilingual-v3.0\"))\nquery_decomposition_pipeline.add_component(\"multi_query_retriever\", MultiQueryInMemoryEmbeddingRetriever(InMemoryEmbeddingRetriever(document_store=document_store)))\nquery_decomposition_pipeline.add_component(\"multi_query_prompt\", PromptBuilder(multi_query_template))\nquery_decomposition_pipeline.add_component(\"query_resolver_llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\n\nquery_decomposition_pipeline.connect(\"prompt\", \"llm\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"embedder.questions\")\nquery_decomposition_pipeline.connect(\"embedder.embeddings\", \"multi_query_retriever.query_embeddings\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"multi_query_retriever.queries\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"embedder.questions\")\nquery_decomposition_pipeline.connect(\"multi_query_retriever.question_context_pairs\", \"multi_query_prompt.question_context_pairs\")\nquery_decomposition_pipeline.connect(\"multi_query_prompt\", \"query_resolver_llm\")\n\n\nRunning this pipeline with the original question “Who has more siblings, Jamie or Sansa?”, results in the following structured output:\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question},\n                                           \"multi_query_prompt\": {\"question\": question}})\n\nprint(result[\"query_resolver_llm\"][\"structured_reply\"])\n\n\nquestions=[Question(question='How many siblings does Jamie have?', answer='2 (Cersei Lannister, Tyrion Lannister)'),\n           Question(question='How many siblings does Sansa have?', answer='5 (Robb Stark, Arya Stark, Bran Stark, Rickon Stark, Jon Snow)')]\n\n\nReasoning About the Final Answer\n\nThe final step we have to take is to reason about the ultimate answer to the original question. Again, we create a prompt that will instruct an LLM to do this. Given we have the questions output that contains each sub question and answer, we will make these inputs to this final prompt.\n\nreasoning_template = \"\"\"\nYou are a helpful assistant that can answer complex queries.\nHere is the original question you were asked: {{question}}\n\nYou have split this question up into simpler questions that can be answered in\nisolation.\nHere are the questions and answers that you've generated\n{% for pair in question_answer_pair %}\n  {{pair}}\n{% endfor %}\n\nReason about the final answer to the original query based on these questions and\naswers\nFinal Answer:\n\"\"\"\n\nresoning_prompt = PromptBuilder(reasoning_template)\n\n\nTo be able to augment this prompt with the question answer pairs, we will have to extend our previous pipeline and connect the structured_reply from the previous LLM, to the question_answer_pair input of this prompt.\n\nquery_decomposition_pipeline.add_component(\"reasoning_prompt\", PromptBuilder(reasoning_template))\nquery_decomposition_pipeline.add_component(\"reasoning_llm\", OpenAIGenerator(model=\"gpt-4o-mini\"))\n\nquery_decomposition_pipeline.connect(\"query_resolver_llm.structured_reply\", \"reasoning_prompt.question_answer_pair\")\nquery_decomposition_pipeline.connect(\"reasoning_prompt\", \"reasoning_llm\")\n\n\nNow, let’s run this final pipeline and see what results we get:\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question},\n                                           \"multi_query_prompt\": {\"question\": question},\n                                           \"reasoning_prompt\": {\"question\": question}},\n                                           include_outputs_from=[\"query_resolver_llm\"])\n\nprint(\"The original query was split and resolved:\\n\")\n\nfor pair in result[\"query_resolver_llm\"][\"structured_reply\"].questions:\n  print(pair)\nprint(\"\\nSo the original query is answered as follows:\\n\")\nprint(result[\"reasoning_llm\"][\"replies\"][0])\n\n\n🥁 Drum roll please:\n\nThe original query was split and resolved:\n\nquestion='How many siblings does Jaime have?' answer='Jaime has one sister (Cersei) and one younger brother (Tyrion), making a total of 2 siblings.'\nquestion='How many siblings does Sansa have?' answer='Sansa has five siblings: one older brother (Robb), one younger sister (Arya), and two younger brothers (Bran and Rickon), as well as one older illegitimate half-brother (Jon Snow).'\n\nSo the original query is answered as follows:\n\nTo determine who has more siblings between Jaime and Sansa, we need to compare the number of siblings each has based on the provided answers.\n\nFrom the answers:\n- Jaime has 2 siblings (Cersei and Tyrion).\n- Sansa has 5 siblings (Robb, Arya, Bran, Rickon, and Jon Snow).\n\nSince Sansa has 5 siblings and Jaime has 2 siblings, we can conclude that Sansa has more siblings than Jaime.\n\nFinal Answer: Sansa has more siblings than Jaime.\n\n\nWrapping up\n\nGiven the right instructions, LLMs are good at breaking down tasks. Query decomposition is a great way we can make sure we do that for questions that are multiple questions in disguise.\n\nIn this article, you learned how to implement this technique with a twist 🙂 Let us know what you think about using structured outputs for these sorts of use cases. And check out the Haystack experimental repo to see what new features we’re working on.",
    "link": "https://haystack.deepset.ai/blog/query-decomposition",
    "snippet": "This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach.",
    "title": "Advanced RAG: Query Decomposition \u0026 Reasoning - Haystack"
  },
  {
    "content_readable": "This page provides a list of attributes and keywords that are used in YouTrack query language. You'll also find a complete list of operators, symbols, and relative date parameters that are recognized in search queries.\n\nSeveral references on this page are not available in Simple Search. Switch to Advanced Search to access them.\n\nIssue Attributes\n\nEvery issue has base attributes that are set automatically by YouTrack. These include the issue ID, the user who created or applied the last update to the issue, and so on.\n\nThese search attributes represent an \u003cAttribute\u003e in the Search Query Grammar. Their values correspond to the \u003cValue\u003e or \u003cValueRange\u003e parameter.\n\nAttribute-based search uses the syntax attribute: value.\n\nYou can specify multiple values for the target attribute, separated by commas.\n\nExclude specific values from the search results with the syntax attribute: -value.\n\nIn many cases, you can omit the attribute and reference values directly with the # or - symbols. For additional guidelines, see Advanced Search.\n\nattachment text\n\nattachment text: \u003ctext\u003e\n\nReturns issues that include image attachments with the specified text.\n\nattachments\n\nattachments: \u003ctext\u003e\n\nReturns issues that include attachments with the specified filename.\n\nBoard\n\nBoard \u003cboard name\u003e: \u003csprint name\u003e\n\nReturns issues that are assigned to the specified sprint on the specified agile board. To find issues that are assigned to agile boards with sprints disabled, use has: \u003cboard name\u003e.\n\ncc recipients\n\ncc recipients: \u003cuser\u003e\n\nReturns tickets where the specified users are added as CCs.\n\ncode\n\ncode: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words that are formatted as code in the issue description or comments. This includes matches that are formatted as inline code spans, indented and fenced code blocks, and stack traces.\n\ncommented: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues to which comments were added on the specified date or within the specified period.\n\ncommenter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were commented by the specified user or by a member of the specified group.\n\ncomments: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in a comment.\n\ncreated\n\ncreated: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that were created on a specific date or within a specified time frame.\n\ndescription\n\ndescription: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in the issue description.\n\ndocument type\n\ndocument type: Issue | Ticket\n\nReturns either issue or ticket type documents.\n\nGantt\n\nGantt: \u003cchart name\u003e\n\nReturns issues that are assigned to the specified Gantt chart.\n\nhas\n\nhas: \u003cattribute\u003e\n\nThe has keyword functions as a Boolean search term. When used in a search query, it returns all issues that contain a value for the specified attribute. Use the minus operator (-) before the specified attribute to find issues that have empty values.\n\nFor example, to find all issues in the TST project that are assigned to the current user, have a duplicates link, have attachments, but do not have any comments, enter in: TST for: me has: duplicates , attachments , -comments.\n\nYou can use the has keyword in combination with the following attributes:\n\nAttribute\n\nDescription\n\nattachments\n\nReturns issues that have attachments.\n\nboards\n\nReturns issues that are assigned to at least one agile board. When used with an exclusion operator (-), returns issues that aren't assigned to any boards.\n\nBoard \u003cboard name\u003e\n\nReturns issues that are assigned to the specified agile board.\n\ncomments\n\nReturns issues that have one or more comments.\n\ndescription\n\nReturns issues that do not have an empty description.\n\n\u003cfield name\u003e\n\nReturns issues that contain any value in the specified custom field. Enclose field names that contain spaces in braces.\n\nGantt\n\nReturns issues that are assigned to any Gantt chart.\n\n\u003clink type name\u003e\n\nReturns issues that have links that match the specified outward name or inward name. Enclose link names that contain spaces in braces.\n\nFor example, to find issues that are linked as subtasks to parent issues, use:\n\nhas: {Subtask of}\n\nTo find issues that aren't linked to a parent issue, use:\n\nhas: -{Subtask of}\n\nlinks\n\nReturns issues that have any issue link type.\n\nstar\n\nReturns issues that have the star tag for the current user.\n\nunderestimation\n\nReturns issues where the total spent time is greater than the original estimation value.\n\nvcs changes\n\nReturns issues that contain vcs changes.\n\nvotes\n\nReturns issues that have one or more votes.\n\nwork\n\nReturns issues that have one or more work items.\n\nissue ID\n\nissue ID: \u003cissue ID\u003e, #\u003cissue ID\u003e\n\nReturns an issue that matches the specified issue ID. This attribute can also be referenced as a single value with the syntax #\u003cissue ID\u003e or -\u003cissue ID\u003e. When the search returns a single issue, the result is displayed in single issue view.\n\nIf you don't use the syntax for an attribute-based search (issue ID: \u003cvalue\u003e or #\u003cvalue\u003e), the input is also parsed as a text search. In addition to any issue that matches the specified issue ID, the search results include any issue that contains the specified ID in any text attribute.\n\nIf you set the issue ID in quotes, the input is only parsed as a text search. The search results only include issues that contain the specified ID in a text attribute.\n\nNote that even when an issue ID is parsed as a text search, the results do not include issue links. To find issues based on issue links, use the links attribute or reference a specific link type.\n\nlinks\n\nlinks: \u003cissue ID\u003e\n\nReturns all issues that contain links to the specified issue.\n\nlooks like\n\nlooks like: \u003cissue ID\u003e\n\nReturns issues in which the issue summary or description contains words that are found in the issue summary or description in the specified issue. Issues that contain matching words in the issue summary are given higher weight when the search results are sorted by relevance.\n\nmentioned in\n\nmentioned in: \u003cissue id\u003e\n\nReturns issues with issue IDs referenced in the description or a comment of the target issue. Issue IDs in supplemental text fields aren't included in the search results.\n\nmentions\n\nmentions: \u003cissue id\u003e, \u003cuser\u003e\n\nReturns issues that contain either @mention for the specified user or issue IDs referenced in the description or a comment. User mentions and issue IDs in supplemental text fields aren't included in the search results.\n\norganization\n\norganization: \u003corganization name\u003e\n\nReturns issues that belong to the specified organization. This attribute can also be referenced as a single value.\n\nproject\n\nproject: \u003cproject name\u003e | \u003cproject ID\u003e\n\nReturns issues that belong to the specified project. This attribute can also be referenced as a single value.\n\nreporter\n\nreporter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues and tickets that were created by the specified user or a member of the specified group, including tickets created on behalf of the specified user. Use me to return issues that were created by the current user.\n\nresolved date\n\nresolved date: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that were resolved on a specific date or within a specified time frame.\n\nsaved search\n\nsaved search: \u003csaved search name\u003e\n\nReturns issues that match the search criteria of a saved search. This attribute can also be referenced as a single value with the syntax #\u003csaved search name\u003e or -\u003csaved search name\u003e.\n\nsubmitter\n\nsubmitter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were submitted by the specified user or a member of the specified group on behalf of another user. Use me to return issues that were submitted by the current user.\n\nsummary\n\nsummary: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in the issue summary.\n\ntag\n\ntag: \u003ctag name\u003e\n\nReturns issues that match a specified tag. This attribute can also be referenced as a single value with the syntax #\u003ctag name\u003e or -\u003ctag name\u003e\n\nupdated\n\nupdated: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues where the most recent change occurred on a specific date or within a specified time frame.\n\nupdater\n\nupdater: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were last updated by the specified user or a member of the specified group. Use me to return issues to which you applied the last update.\n\nvcs changes\n\nvcs changes: \u003ccommit hash\u003e\n\nReturns issues that contain vcs changes that were applied in the commit object that is identified by the specified SHA-1 commit hash.\n\nvisible to\n\nvisible to: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that are visible to the specified user or a member of the specified group.\n\nvoter\n\nvoter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that have votes from the specified user or a member of the specified group.\n\nCustom Fields\n\nYou can find issues that are assigned specific values in a custom field. As with other issue attributes, you use the syntax attribute: value or attribute: -value. In this case, the attribute is the name of the custom field. In most cases, you can reference values directly with the # or - symbols.\n\nFor custom fields that are assigned an empty value, you can reference this property as a value. For example, to search for issues that are not assigned to a specific user, enter Assignee: Unassigned or #Unassigned. If the field is not assigned an empty value, find issues that do not store a value in the field with the syntax \u003cfield name\u003e: {No \u003cfield name\u003e} or has: -\u003cfield name\u003e.\n\nThis section lists the search attributes for default custom fields. Note that default fields and their values can be customized. The actual field names, values, and aliases may vary.\n\nAffected versions\n\nAffected versions: \u003cvalue\u003e\n\nReturns issues that were detected in a specific version of the product.\n\nAssignee\n\nAssignee: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns all issues that are assigned to the specified user or a member of the specified group.\n\nFix versions\n\nFix versions: \u003cvalue\u003e\n\nReturns issues that were fixed in a specific version of the product.\n\nFixed in build\n\nFixed in build: \u003cvalue\u003e\n\nReturns issues that were fixed in the specified build.\n\nPriority\n\nPriority: \u003cvalue\u003e\n\nReturns issues that match the specified priority level.\n\nState\n\nState: \u003cvalue\u003e | Resolved | Unresolved\n\nReturns issues that match the specified state.\n\nThe Resolved and Unresolved states cannot be assigned to an issue directly, as they are properties of specific values that are stored in the State field.\n\nBy default, Fixed, Won't fix, Duplicate, Incomplete, Obsolete, and Can't reproduce states are set as Resolved.\n\nThe Submitted, Open, In Progress, Reopened, and To be discussed states are set as Unresolved.\n\nSubsystem\n\nSubsystem: \u003cvalue\u003e\n\nReturns issues that are assigned to a specific subsystem within a project.\n\nType\n\nType: \u003cvalue\u003e\n\nReturns issues that match the specified issue type.\n\nIssue Links\n\nYou can search for issues based on the links that connect them to other issues. Search queries that reference a specific issue link type can be interpreted in two different ways:\n\nWhen specified as \u003clink type\u003e: \u003cissue ID\u003e, the query returns issues linked to the specified issue using this link type.\n\nUsing \u003clink type\u003e: (\u003csub-query\u003e), the query returns issues linked to any issue that matches the specified sub-query using this link type.\n\nWhen searching for linked issues, you can enter the outward name or inward name of any issue link type, then specify your search criteria.\n\nThis list contains search parameters for issue link types that are provided by default in YouTrack. The default issue link types can be customized, which means that the actual names may vary. You can also use this syntax to build search queries that refer to custom link types.\n\nlinks\n\nlinks: \u003cissue ID\u003e\n\nReturns issues that are linked to a target issue.\n\naggregate\n\naggregate \u003caggregation link type\u003e: \u003cissue ID\u003e\n\nReturns issues that are indirectly linked to a target issue. Use this search term to find, for example, issues that are parent issues for a parent issue or subtasks of issues that are also subtasks of a target issue. The results include any issue that is linked to the target issue using the specified link type, whether directly or indirectly.\n\nThis search argument is only compatible with aggregation link types.\n\nDepends on\n\nDepends on: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have depends on links to a target issue or any issue that matches the specified sub-query.\n\nDuplicates\n\nDuplicates: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have duplicates links to a target issue or any issue that matches the specified sub-query.\n\nIs duplicated by\n\nIs duplicated by: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have is duplicated by links to a target issue or any issue that matches the specified sub-query.\n\nIs required for\n\nIs required for: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have is required for links to a target issue or any issue that matches the specified sub-query.\n\nParent for\n\nParent for: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have parent for links to a target issue or any issue that matches the specified sub-query.\n\nRelates to\n\nRelates to: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have relates to links to a target issue or any issue that matches the specified sub-query.\n\nSubtask of\n\nSubtask of: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have subtask of links to a target issue or any issue that matches the specified sub-query.\n\nTime Tracking\n\nThere is a dedicated set of search attributes that you can use to find issues that contain time tracking data. These attributes look for specific values that have been added as work items to an issue.\n\nwork\n\nwork: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or phrase in a work item.\n\nwork author: \u003cuser\u003e\n\nReturns issues that have work items that were added by the specified user.\n\nwork type\n\nwork type: \u003cvalue\u003e\n\nReturns issues that have work items that are assigned the specified work type. The query work type: {No type} returns issues that have work items that are not assigned a work item type.\n\nwork date\n\nwork date: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that have work items that are recorded for the specified date or within the specified time frame.\n\ncustom work item attributes\n\nwork \u003cattribute name\u003e: \u003cattribute value\u003e\n\nReturns issues that have work items that are assigned the specified value for a specific work item attribute.\n\nSort Attributes\n\nYou can specify the sort order for the list of issues that are returned by the search query.\n\nYou can sort issues by any of the attributes on the following list. In the Search Query Grammar, these attributes represent the \u003cSortAttribute\u003e value.\n\nsort by\n\nsort by: \u003cvalue\u003e \u003csort order\u003e\n\nSorts issues that are returned by the query in the specified order.\n\nWhen you perform a text search, the results can be sorted by relevance. You cannot specify relevance as a sort attribute. For more information, see Sorting by Relevance.\n\nKeywords\n\nThere are a number of values that can be substituted with a keyword. When you use a keyword in a search query, you do not specify an attribute. A keyword is preceded by the number sign (#) or the minus operator. In the YouTrack Search Query Grammar, these keywords correspond to a \u003cSingleValue\u003e.\n\nme\n\nReferences the current user. This keyword can be used as a value for any attribute that accepts a user.\n\nWhen used as a single value (#me) the search returns issues that are assigned to, reported by, or commented by the current user.\n\nFor example, to find unresolved issues that are assigned to, reported by, or contain comments from the current user, enter #me -Resolved.\n\nThe results also include issues that contain references to the current user in any custom field that stores values as users. For example, you have a custom field Reviewed by that stores a user type. The search query #me -Resolved also includes issues that reference the current user in this custom field.\n\nmy\n\nAn alias for me.\n\nResolved\n\nThis keyword references the Resolved issue property. This property is set based on the current value or combination of values for any custom field that stores a state type. In the default State field, the Resolved property is enabled for the values Fixed, Won't fix, Duplicate, Incomplete, Obsolete, and Can't reproduce.\n\nFor projects that use multiple state-type fields, the Resolved property is only true when all the state-type fields are assigned values that are considered to be resolved.\n\nFor example, to find all resolved issues that were updated today, enter #Resolved updated: Today.\n\nUnresolved\n\nThis keyword references the Unresolved issue property. This property is set based on the current value or combination of values for any custom field that stores a state type. In the default State field, the Resolved property is disabled for the values Submitted, Open, In Progress, Reopened, and To be discussed.\n\nFor projects that use multiple state-type fields, the Unresolved property is true when any state-type field is assigned a value that is not considered to be resolved.\n\nFor example, to find all unresolved issues that are assigned to the user john.doe in the Test project, enter #Unresolved project: Test for: john.doe.\n\nReleased\n\nThis keyword references the Released property for values in a field that stores a version type. It can only be used together with the attribute name or alias for a version field. This means that it cannot be referenced as a single value.\n\nWith fields that store multiple values, the search query returns issues for which at least one of the versions that are stored in the field is marked as released.\n\nFor example, to find all issues in the Test project that are fixed in a version that has not yet been released, enter in: Test fixed in: -Released.\n\nArchived\n\nThis keyword references the Archived property for values in a field that stores a version type. It can only be used together with the attribute name or alias for a version field. This means that it cannot be referenced as a single value.\n\nWith fields that store multiple values, the search query only returns issues for which all the versions that are stored in the field are marked as archived.\n\nFor example, to find all issues in the Test project that are fixed in a version that has been archived, enter in: Test fixed in: Archived.\n\nOperators\n\nThe search query grammar applies default semantics to search queries that do not contain explicit logical operators.\n\nSearches that specify values for multiple attributes are treated as conjunctive. This means that the values are handled as if joined by an AND operator. For example, State: {In Progress} Priority: Critical returns issues that are assigned the specified state and priority.\n\nThis extends to queries that look for the presence or absence of a value for a specific attribute (has) in combination with a reverence to a specific value for the same attribute. The presence or absence of a value and the value itself are considered as separate attributes in the issue. For example, has: assignee Assignee: me only returns issues where the assignee is set and that assignee is you.\n\nFor text search, searches that include multiple words are treated as conjunctive. This means that the words are handled as if joined by an AND operator. For example, State: Open context usage returns issues that contain matching forms for both context and usage.\n\nSearches that include multiple values for a single attribute are treated as disjunctive. This means that the values are handled as if joined by an OR operator. For example, State: {In Progress}, {To be discussed} returns issues that are assigned either one or the other of these two states.\n\nYou can override the default semantics by applying explicit operators to the query.\n\nand\n\nThe AND operator combines matches for multiple search attributes to narrow down the search results. When you join search arguments with the AND operator, the resulting issues must contain matches for all the specified attributes. Use this operator for issue fields that store enum[*] types and tags.\n\nSearch arguments that are joined with an AND operator are always processed as a group and have a higher priority than other arguments that are joined with an OR operator in the query.\n\nHere are a few examples of search queries that contain AND operators:\n\nTo find issues in the Ktor project that are tagged as both Next build and to be tested, enter:\n\nin: Ktor and tag: {Next build} and tag: {to be tested}\n\nThe AND operator between the two tags ensures that the results only contain issues that have both tags.\n\nTo find all issues that are set as Critical priority in the Ktor project or are set as Major priority and are assigned to you in the Kotlin project, enter:\n\nin: Ktor #Critical or in: Kotlin #Major and for: me\n\nIf you were to remove the operators in this query, the references to the project and priority are parsed as disjunctive (OR) statements. The reference to the assignee (me) is then joined with a conjunctive (AND) statement. Instead of getting critical issues in the Ktor project plus a list of major-priority issues that you are assigned in Kotlin, you would only issues that are assigned to you that are either major or critical in either Ktor or Kotlin.\n\nor\n\nThe OR operator combines matches for multiple search attribute to broaden the search results.\n\nThis is very useful when searching for a term which has a synonym that might be used in an issue instead. For example, a search for lesson OR tutorial returns issues that contain matching forms for either \"lesson\" or \"tutorial\". If you remove the OR operator from the query, the search is performed conjunctively, which means the result would only include issues that contain matching forms for both words.\n\nHere's another example of a search query that contains an OR operator:\n\nTo find all issues in the Ktor project that are assigned to you or are tagged as to be tested in any project, enter:\n\nin: Ktor for: me or tag: {to be tested}\n\nParentheses\n\nUsing parentheses ( and ) combines various search arguments to change the order in which the attributes and operators are processed. The part of a search query inside the parentheses has priority and is always processed as a single unit.\n\nThe most common use of parentheses is to enclose two search arguments that are separated by an OR operator and further restrict the search results by joining additional search arguments with AND operators.\n\nAny time you use parentheses in a search query, you need to provide all the operators that join the parenthetical statement to neighboring search arguments. For example, the search query in: Kotlin #Critical (in: Ktor and for:me) cannot be processed. It must be written as in: Kotlin #Critical or (in: Ktor and for:me) instead.\n\nHere's an example of a search query that uses parentheses:\n\nTo find all issues that are assigned to you and are either assigned Critical priority in the Kotlin project or are assigned Major priority in the Ktor project, enter:\n\n(in: Kotlin #Critical or in: Ktor #Major) and for: me\n\nSymbols\n\nThe following symbols can be used to extend or refine a search query.\n\nSymbol\n\nDescription\n\nExamples\n\n-\n\nExcludes a subset from a set of search query results. When you use this symbol with a single value, do not use the number sign.\n\nTo find all unresolved issues except for issues with minor priority and sort the list of results by priority in ascending order, enter #unresolved -minor sort by: priority asc.\n\n#\n\nIndicates that the input represents a single value.\n\nTo find all unresolved issues in the MRK project that were reported by, assigned to, or commented by the current user, enter #my #unresolved in: MRK.\n\n,\n\nSeparates a list of values for a single attribute. Can be used in combination with a range.\n\nTo find all issues assigned to, reported or commented by the current user, which were created today or yesterday, enter #my created: Today, Yesterday.\n\n..\n\nDefines a range of values. Insert this symbol between the values that define the upper and lower ranges. The search results include the upper and lower bounds.\n\nTo find all issues fixed in version 1.2.1 and in all versions from 1.3 to 1.5, enter fixed in: 1.2.1, 1.3 .. 1.5.\n\nTo find all issues created between March 10 and March 13, 2018, enter created: 2018-03-10 .. 2018-03-13.\n\n*\n\nWildcard character. Its behavior is context-dependent.\n\nWhen used with the .. symbol, substitutes a value that determines the upper or lower bound in a range search. The search results are inclusive of the specified bound.\n\nWhen used in an attribute-based search, matches zero or more characters at the end of an attribute value. For more information, see Wildcards in Attribute-based Search.\n\nWhen used in text search, matches zero or more characters in a string. For more information, see Wildcards in Text Search.\n\nTo find all issues created on or before March 10, 2018, enter created: * .. 2018-03-10\n\nTo find issues that have tags that start with refactoring, enter tag: refactoring*.\n\nTo find unresolved issues that contain image attachments in PNG format, enter #Unresolved attachments: *.png.\n\n?\n\nMatches any single character in a string. You can only use this wildcard to search in attributes that store text. For more information, see Wildcards in Text Search.\n\nTo find issues that contain the words \"prioritize\" or \"prioritise\" in the issue description, enter description: prioriti?e\n\n{ }\n\nEncloses attribute values that contain spaces.\n\nTo find all issues with the Fixed state that have the tag to be tested, enter #Fixed tag: {to be tested}.\n\nDate and Period Values\n\nSeveral search attributes reference values that are stored as a date. You can search for dates as single values or use a range of values to define a period.\n\nSpecify dates in the format: YYYY-MM-DD or YYYY-MM or MM-DD. You also can specify a time in 24h format: HH:MM:SS or HH:MM. To specify both date and time, use the format: YYYY-MM-DD}}T{{HH:MM:SS. For example, the search query created: 2010-01-01T12:00 .. 2010-01-01T15:00 returns all issues that were created on 1 January 2010 between 12:00 and 15:00.\n\nPredefined Relative Date Parameters\n\nYou can also use pre-defined relative parameters to search for date values. The values for these parameters are calculated relative to the current date according to the time zone of the current user. The actual value for each parameter is shown in the query assist panel.\n\nThe following relative date parameters are supported:\n\nParameter\n\nDescription\n\nNow\n\nThe current instant.\n\nToday\n\nThe current calendar day.\n\nTomorrow\n\nThe next calendar day.\n\nYesterday\n\nThe previous calendar day.\n\nSunday\n\nThe calendar Sunday for the current week.\n\nMonday\n\nThe calendar Monday for the current week.\n\nTuesday\n\nThe calendar Tuesday for the current week.\n\nWednesday\n\nThe calendar Wednesday for the current week.\n\nThursday\n\nThe calendar Thursday for the current week.\n\nFriday\n\nThe calendar Friday for the current week.\n\nSaturday\n\nThe calendar Saturday for the current week.\n\n{Last working day}\n\nThe most recent working day as defined by the Workdays that are configured in the settings on the Time Tracking page in YouTrack.\n\n{This week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the current week.\n\n{Last week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the previous week.\n\n{Next week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the next week.\n\n{Two weeks ago}\n\nThe period from 00:00 Monday to 23:59 Sunday for the calendar week two weeks prior to the current date.\n\n{Three weeks ago}\n\nThe period from 00:00 Monday to 23:59 Sunday for the calendar week three weeks prior to the current date.\n\n{This month}\n\nThe period from the first day to the last day of the current calendar month.\n\n{Last month}\n\nThe period from the first day to the last day of the previous calendar month.\n\n{Next month}\n\nThe period from the first day to the last day of the next calendar month.\n\nOlder\n\nThe period from 1 January 1970 to the last day of the month two months prior to the current date.\n\nCustom Date Parameters\n\nIf the predefined date parameters don't help you find issues that matter most to you, define your own date range in your search query. Here are a few examples of the queries you can write with custom date parameters:\n\nFind issues that have new comments added in the last seven days:\n\ncommented: {minus 7d} .. Today\n\nFind issues that were updated in the last two hours:\n\nupdated: {minus 2h} .. *\n\nFind unresolved issues that are at least one and a half years old:\n\ncreated: * .. {minus 1y 6M} #Unresolved\n\nFind issues that are due in five days:\n\nDue Date: {plus 5d}\n\nTo define a custom time frame in your search queries, use the following syntax:\n\nTo specify dates or times in the past, use minus.\n\nTo specify dates or times in the future, use plus.\n\nSpecify the time frame as a series of whole numbers followed by a letter that represents the unit of time. Separate each unit of time with a space character. For example:\n\n2y 3M 1w 2d 12h\n\nQueries that specify hours will filter for events that took place during the specified hour. For example, if it is currently 15:35, a query that is written as created: {minus 48h} returns issues that were created two days ago, at any time between 3 and 4 PM. Meanwhile, a query that is written as created: {minus 2d} returns all issues that were created two days ago at any time between midnight and 23:59.\n\nThis level of precision only applies to hours. A query that references the unit of time as 14d returns exactly the same results as 2w.\n\nSearch queries that specify units of time shorter than one hour (minutes, seconds) are not supported.\n\nSearch Query Grammar\n\nThis page provides a BNF description of the YouTrack search query grammar.\n\n\u003cSearchRequest\u003e ::= \u003cOrExpression\u003e \u003cOrExpession\u003e ::= \u003cAndExpression\u003e ('or' \u003cAndExpression\u003e)* \u003cAndExpression\u003e ::= \u003cAndOperand\u003e ('and' \u003cAndOperand\u003e)* \u003cAndOperand\u003e ::= '('\u003cOrExpression\u003e? ')' | Term \u003cTerm\u003e ::= \u003cTermItem\u003e* \u003cTermItem\u003e ::= \u003cQuotedText\u003e | \u003cNegativeText\u003e | \u003cPositiveSingleValue\u003e | \u003cNegativeSingleValue\u003e | \u003cSort\u003e | \u003cHas\u003e | \u003cCategorizedFilter\u003e | \u003cText\u003e \u003cCategorizedFilter\u003e ::= \u003cAttribute\u003e ':' \u003cAttributeFilter\u003e (',' \u003cAttributeFilter\u003e)* \u003cAttribute\u003e ::= \u003cname of issue field\u003e \u003cAttributeFilter\u003e ::= ('-'? \u003cValue\u003e ) | ('-'? \u003cValueRange\u003e) | \u003cLinkedIssuesQuery\u003e \u003cLinkedIssuesQuery\u003e ::= ( \u003cOrExpression\u003e ) \u003cValueRange\u003e ::= \u003cValue\u003e '..' \u003cValue\u003e \u003cPositiveSingleValue\u003e ::= '#'\u003cSingleValue\u003e \u003cNegativeSingleValue\u003e ::= '-'\u003cSingleValue\u003e \u003cSingleValue\u003e ::= \u003cValue\u003e \u003cSort\u003e ::= 'sort by:' \u003cSortField\u003e (',' \u003cSortField\u003e)* \u003cSortField\u003e ::= \u003cSortAttribute\u003e ('asc' | 'desc')? \u003cHas\u003e ::= 'has:' \u003cAttribute\u003e (',' \u003cAttribute\u003e)* \u003cQuotedText\u003e ::= '\"' \u003ctext without quotes\u003e '\"' \u003cNegativeText\u003e ::= '-' \u003cQuotedText\u003e \u003cText\u003e ::= \u003ctext without parentheses\u003e \u003cValue\u003e ::= \u003cComplexValue\u003e | \u003cSimpleValue\u003e \u003cSimpleValue\u003e ::= \u003cvalue without spaces\u003e \u003cComplexValue\u003e ::= '{' \u003cvalue (can have spaces)\u003e '}'\n\nGrammar is case-insensitive.\n\nFor a complete list of search attributes, see Issue Attributes.\n\nTo see sample queries for common use cases, see Sample Search Queries.\n\n11 November 2025",
    "link": "https://www.jetbrains.com/help/youtrack/cloud/search-and-command-attributes.html",
    "snippet": "This page provides a list of attributes and keywords that are used in YouTrack query language. You'll also find a complete list of operators, symbols, and ...",
    "title": "Search Query Reference | YouTrack Cloud Documentation - JetBrains"
  },
  {
    "content_readable": "Introduced in 2020, the GitHub user profile README allow individuals to give a long-form introduction. This multi-part tutorial explains how I setup my own profile to create dynamic content to aid discovery of my projects:\n\nwith the Liquid template engine and Shields (Part 1 of 4)\nusing GitHub's GraphQL API to query dynamic data about all my repos (keep reading below)\nfetching RSS and Social cards from third-party sites (Part 3 of 4)\nautomating updates with GitHub Actions (Part 4 of 4)\n\nYou can visit github.com/j12y to see the final result of what I came up with for my own profile page.\n\nThe GitHub Repo Gallery\n\nThe intended behavior for my repo gallery is to create something similar to pinned repositories but with a bit more visual pizzazz to identify what the projects are about.\n\nIn addition to source code, the repo can have metadata associated with it:\n\n✔️ Name of the repository\n✔️ Short description of the project\n✔️ Programming language used for the project\n✔️ List of tags / topics\n✔️ Image that can be used for social cards\n\nAbout\n\nThe About has editable fields to set the description and topics.\n\nSettings\n\nThe Settings includes a place to upload an image for social media preview cards.\n\nIf you don't set a preview card image, GitHub will generate one automatically that includes some basic profile statistics and your user profile image.\n\nGetting Started with the GitHub REST API\n\nThe way I structured this project is to build a library of any functions related to querying GitHub in src/gh.ts. I used a .env file to store my personal access (classic) token for authentication during local development.\n\n├── package.json\n├── .env\n├── src\n│   ├── app.ts\n│   ├── gh.ts\n│   └── template\n│       ├── README.liquid\n│       ├── contact.liquid\n│       └── gallery.liquid\n└── tsconfig.json\n\n\nI started by using REST endpoints with the Octokit library and TypeScript bindings.\n\n// src/gh.ts\nimport { Octokit } from 'octokit';\nimport { RestEndpointMethodTypes } from '@octokit/plugin-rest-endpoint-methods'\nconst octokit = new Octokit({ auth: process.env.TOKEN});\n\nexport class GitHub {\n    // GET /users/{user}\n    // https://docs.github.com/en/rest/users/users#get-a-user\n    async getUserDetails(user: string): Promise\u003cRestEndpointMethodTypes['users']['getByUsername']['response']['data']\u003e {\n        const { data } = await octokit.rest.users.getByUsername({\n            username: user\n        });\n\n        return data;\n    };\n}\n\n\nFrom src/app.ts I initialize the GithHub class, fetch the results, and can inspect the data being returned as a way to get comfortable with the various endpoints.\n\n// src/app.ts\nimport dotenv from 'dotenv';\nimport { GitHub } from \"./gh\";\n\nexport async function main() {\n  dotenv.config();\n  const gh = new GitHub()\n\n  const details = await gh.getUserDetails();\n  console.log(details);\n}\nmain();\n\n\nI typically get started on projects with simple tests like this to make sure all the various pieces to an integration can be configured and work together before getting too far.\n\nUse the GitHub GraphQL Endpoint\n\nTo get the data needed for the gallery layout, it would be necessary to make multiple calls to REST endpoints. In addition there is some data not yet available from the REST endpoint at all.\n\nSwitching to query using the GitHub GraphQL interface becomes helpful. This single endpoint can process a number of queries and give precise control over the data needed.\n\n💡 The GitHub GraphQL Explorer was fundamentally useful for me to get the right queries defined\n\nThis query needs authorization with the personal access token to fetch profile details about followers similar to some of the details returned from the REST endpoints.\n\n// src/gh.ts\n\nconst { graphql } = require(\"@octokit/graphql\")\n\nexport class GitHub \n    // https://docs.github.com/en/graphql\n    graphqlWithAuth = graphql.defaults({\n        headers: {\n            authorization: `token ${process.env.TOKEN}`\n        }\n    })\n\n    async getProfileOverview(name: string): Promise\u003cany\u003e {\n        const query = `\n            query getProfileOverview($name: String!) { \n                user(login: $name) { \n                    followers(first: 100) {\n                        totalCount\n                        edges {\n                            node {\n                                login\n                                name\n                                twitterUsername\n                                email\n                            }\n                        }\n                    }\n                }\n            }\n        `;\n        const params = {'name': name};\n\n        return await this.graphqlWithAuth(query, params);\n    }\n}\n\n\nThere are other resources such as Learn GraphQL if you haven't written many queries yet which explains the basics around syntax, schemas, and types.\n\nGetting used to GitHub's GraphQL schema primarily involves walking a series of edges to find linked nodes for objects of interest and their data attributes. In this case, I started by querying a user profile, finding the list of linked followers, and then inspecting their corresponding node's login, name, and email address.\n\n   ┌────────────┐\n   │    user    │\n   └─────┬──────┘\n         │\n         └──followers\n               │\n               ├─── totalCount\n               │\n               └─── edges\n                     │\n                     └── node\n\n\n\nFaceted Search by Topic Frequency\n\nI often want to find repositories by a topic. The user interface makes it easy to filter among many repositories by programming language such as python but unless you know which topics are relevant can become hit or miss. Was it nlp or nltk I used to categorize related repositories. Did I use dolby or dolbyio to identify repos I have for work projects.\n\nA faceted search that narrows down the number of matching repositories can be helpful for finding relevant projects like this. Given topics on GitHub are open-ended and not constrained to fixed values, it can be easy to accidentally categorize repos with variations like lambda and aws-lambda such that searches only identify partial results.\n\nTo address this, a GraphQL query gathering topics by frequency of usage within an organization or individual account can help with identifying the most useful topics.\n\nThe steps for this would be:\n\nQuery repository topics\nProcess results to group topics by frequency\nUse a template to render the gallery\n\n1 - Query Repository Topics\n\nI used the following GraphQL query to fetch my repositories and their corresponding topics.\n\nconst query = `\n    query getReposOverview($name: String!) {\n        user(login: $name) {\n            repositories(first: 100 ownerAffiliations: OWNER) {\n                edges {\n                    node {\n                        name\n                        url\n                        description\n                        openGraphImageUrl\n                        repositoryTopics(first: 100) {\n                            edges {\n                                node {\n                                    topic {\n                                        name\n                                    }\n                                }\n                            }\n                        }\n                        primaryLanguage {\n                            name\n                        }\n                    }\n                }\n            }\n        }\n    }\n`;\n\n\nThis query starts by filtering by user owned repositories (not counting forks) along with the metadata such as the social image.\n\n2 - Process Results and Group Topics by Frequency\n\nIterating over the results of the query the convention used was to look for anything with the topic github-gallery as something to be featured in the gallery. We also get a count of usage for each of the other topics and programming languages.\n\nvar topics: {[id: string]: number } = {};\nvar languages: {[id: string]: number } = {};\nvar gallery: {[id: string]: any } = {};\n\nconst repos = await gh.getReposOverview(user);\nfor (let repo of repos.user.repositories.edges) {\n  // Count occurrences of each topic\n  repo.node.repositoryTopics.edges.forEach((topic: any) =\u003e {\n    if (topic.node.topic.name == 'github-gallery') {\n      gallery[repo.node.name] = repo;\n    } else {\n      topics[topic.node.topic.name] = topic.node.topic.name in topics ? topics[topic.node.topic.name] + 1 : 1;\n    }\n  });\n\n  // Count and include count of language used\n  if (repo.node.primaryLanguage) {\n    languages[repo.node.primaryLanguage.name] = repo.node.primaryLanguage.name in languages ? languages[repo.node.primaryLanguage.name] + 1 : 1;\n  }\n}\n\n\n3 - Use a template to render the gallery\n\nThe topics are ordered by how often they are used. From the previous post on setting up a dynamic profile, I'm passing scope to the liquid engine for any data to be made available in a template.\n\n  // Share topics sorted by frequency of use for filtering repositories\n  // from the organization\n  scope['topics'] = Object.entries(topics).sort(function (first, second) {\n    return second[1] - first[1];\n  });\n  scope['languages'] = Object.entries(languages).sort(function (first, second) {\n    return second[1] - first[1];\n  });\n\n  // Gather topics across repos\n  scope['gallery'] = Object.values(gallery);\n\n\n\nThe repository page on GitHub uses query parameters to sort and filter, so items like topic:nltk can be passed directly in the URL to load a filtered view of repositories. The shields create a nice looking button for navigating to the topic, and use of icons for programming languages helps find relevant code samples.\n\n\u003cp\u003eExplore some of my projects: \u003cbr/\u003e\n{% for language in languages %}\u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=language%3A{{language[0]}}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/badge/{{ language[0] }}-{{ language[1] }}-lightgrey?logo={{ language[0] }}\u0026label={{ language[0] }}\u0026labelColor=000000\" alt=\"{{ language[0] }}\"/\u003e\u003c/a\u003e {% endfor %}\n{% for topic in topics %}\u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=topic%3A{{topic[0]}}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/static/v1?label={{topic[0]}}\u0026message={{ topic[1] }}\u0026labelColor=blue\"/\u003e\u003c/a\u003e {% endfor %}\n\u003c/p\u003e\n\n\nThe presentation includes a 3-column row in a table for displaying the metadata about each featured gallery project. This could display all repositories, but limiting to one or two rows seems sensible for managing screen space.\n\n{% for tile in gallery limit:3 %}\n\u003ctd width=\"25%\" valign=\"top\" style=\"padding-top: 20px; padding-bottom: 20px; padding-left: 30px; padding-right: 30px;\"\u003e\n\u003ca href=\"{{ tile.node.url }}\"\u003e\u003cimg src=\"{{ tile.node.openGraphImageUrl }}\"/\u003e\u003c/a\u003e\n\u003cp\u003e\u003cb\u003e\u003ca href=\"{{ tile.node.url }}\"\u003e{{ tile.node.name }}\u003c/b\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e{{ tile.node.description }}\u003cbr/\u003e\n{% for topic in tile.node.repositoryTopics.edges %} \u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=topic%3A{{topic.node.topic.name }}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/badge/{{ topic.node.topic.name | replace: \"-\", \"--\" }}-blue?style=pill\"/\u003e\u003c/a\u003e {% endfor %}\n\u003c/p\u003e\n\u003c/td\u003e\n{% endfor %}\n\n\nWith all of that put together, we now have a gallery that displays a picture along with the name, description, and tags. The picture can highlight a user interface, architectural diagram, or some other branded visual to help identify the purpose of the project visually.\n\nWe can also use this to maintain our list of topics and make finding relevant topics for an audience easier to discover.\n\nLearn more\n\nI hope this overview helps with getting yourself sorted. The next article will dive into some of the other ways of aggregating content.\n\nFetching RSS and Social Cards for GitHub Profile (Part 3 of 4)\nAutomating GitHub Profile Updates with Actions (Part 4 of 4)\n\nDid this help you get your own profile started? Let me know and follow to get notified about updates.",
    "link": "https://dev.to/j12y/query-github-repo-topics-using-graphql-35ha",
    "snippet": "Creating a customized user profile page for GitHub to showcase work projects and make navigation to relevant topics easier.",
    "title": "Query GitHub Repo Topics Using GraphQL - DEV Community"
  },
  {
    "content_readable": "Updated\n\n4 days ago\n\nWith millions of conversations happening all over the web each day, it can be a long and tedious task trying to get more relevant mentions and tighten the scope of your query, but with the help of Advanced Topic Query, it can be at your fingertips.\n\nIn Social Listening, you have the option to create an advanced query that is not limited to ANY, ALL, or NONE formatting of query building. Advanced query builder can be used to form complex text queries which are not possible with a normal query builder.\n\nWhat is an Advanced Topic Query?\n\nAdvanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more.\n\nBy using advanced query you can pinpoint relevant information which is not possible with basic topic query.\n\nIt gives you the power to find the needle in a haystack.\n\n​\n\nBasic Topic Query v/s Advanced Topic Query\n\nWith more operators to use you can fetch conversations by language, geography, social media channel, volume, author, #listening, @account monitoring, user segment, and much more, it can give you access to more actionable insights.\n\nIn Basic Query, you can only use boolean operators like OR/ NOT/ AND/ along with NEAR. On the other hand, in Advanced Topic Query, it gives you access to use OR with/ inside AND, NOT (nested and within operator use cases), advanced operators, exact match operators etc.\n\nLet's see the use cases where advanced query will help in getting more insightful mentions –\n\nUse case #1: To search \"pepsi\" OR \"drink\" along with \"cups\".\n\nBasic Query\n\nAdvancd Query\n\nUse case #2: To get mentions of \"pepsi\" along with \"coke\" or \"sprite\" but not \"miranda\" with people having \"follower count\" between 100 to 1000 on \"twitter\".\n\nBasic Query\n\nAdvanced Query\n\nNot feasible in the basic Topic query\n\nThis is where we need the advanced Topic query.​\n\nHow to create an advanced Topic query?\n\nClick the New Tab icon. Under Sprinklr Insights, Click Topics within Listening.\n\nOn the Topics window, click Add Topic in the top right corner. Fill in the required fields and click Create.\n\nIn the Setup Query tab of Create New Topic window, select Advanced Query in the query section.\n\n​\n\nType your query in the Advanced Query field with the required operators and syntax.\n\nClick Save.\n\nTip: While using Instagram as a Listening Source, be sure that your query keywords include hashtags.\n\nWhich operators to use for building Topic queries?\n\nOperators for Topic queries\n\nIn creation of advanced queries along with boolean operators OR/ AND/ NOT/ etc, Sprinklr also supports operator types –\n\nSearch Operators\n\nExact Match Operators\n\nOperators for Getting Post Replies/Comments​\n\nSprinklr provides its user edge by giving them power to use Keywords List inside advanced query along with Operators mentioned.\n\nCreate query using Topic query operators\n\nFollowing are some most used operator examples and their results –\n\nOperator\n\nExample\n\nResult\n\nhello\n\nSearch for the term \"hello\"\n\nsocial sprinklr\n\nSearch for the phrases \"social\" and \"sprinklr\"\n\n​\n\nNote: Using this will show preview but topic can not be saved as it will show error, Use \"Social Sprinklr\" or (Social AND/OR/ NOT/ NEAR Sprinklr) to eliminate error.\n\nAND\n\nsocial AND sprinklr\n\nSearch for \"social\" and \"sprinklr\" anywhere within the complete message, irrespective of keywords between them\n\nOR\n\nsocial OR sprinklr\n\nSearch for \"social\" or \"sprinklr\"\n\nNOT\n\n\"social media\" NOT \"facebook\"\n\nSearch for results that contain \"social media\" but not \"facebook\"\n\n~\n\n\"social media\"~10\n\nSearch for \"social\" and \"media\" within 10 words of each other\n\nNEAR\n\nsocial NEAR/10 media\n\nSearch for \"social\" and \"media\" within 10 words of each other\n\nNote: This operator can be used with keyword lists.\n\nONEAR\n\nsocial ONEAR/10 media\n\nSearch for \"social\" and \"media\" within 10 words of each other in an ordered way\n\nNote: This operator searches social ahead of media.\n\ntitle\n\ntitle: (\"social media\")\n\nSearch for social media in the title of the message\n\nNote: It is mostly used for News, blogs, reviews and other sites.\n\nauthor\n\nauthor: \"social_media\"\n\nFetches all the mentions from author name: social_media\n\nSome other operators which are supported by Sprinklr are –\n\nProximity: It is used to define proximity or distance between 2 keywords only, whereas, NEAR can be used to define proximity between two keywords as well as keyword lists.\n\nOnear (Ordered Near): It sets the order in which the keywords will appear. For example, Keyword-List1 ONEAR/10 Keyword-List2 will ensure keywords from Keyword-List1 appear first and then Keyword-List2 keywords will follow within space of maximum 10.\n\nStep by step guide to make advanced Topic query\n\nUse case\n\nTo write query fetching mentions of ZARA –\n\n​\n\n(# listening is used for instagram listening)\n\nGetting mention along with clothing or fashion related terms only –\n\nRemoving profanity from mention (use case specific) –\n\nRemoving profanity from mention (use case specific) –\n\nAs social media has lots of profane words you can also remove it by making a keyword list and negating it from query –\n\nFiltering Mentions in English –\n\n​\n\nApplying source input as Twitter –\n\nGetting mentions of those users which have followers between 100 to 1000 –\n\n​\n\nAdvanced example showcasing use of Topic query operators and keyword list –\n\nBest practices while using Advanced Query\n\nUse of Parentheses\n\n​Parentheses are not necessary to enclose a search query but can be useful while grouping operations together for more complex queries.\n\n​\n\nFor example, if you want to return results that mention Samsung or Apple phones, and also want to query content that mentions phones along with either Apple or Samsung, you could use parentheses around Apple and Samsung to group three keywords together, as shown below –\n\nphone AND (Apple OR Samsung)\n\n​\n\nUse of parentheses within brackets, is further explained below with an example –\n\n[(internet of things ~3) OR iot OR internetofthings) AND (robots OR robot OR #robot)] NOT [things]\n\nTip: You can also use parentheses within brackets to set off additional operations within the Advanced Query field. The end result should look similar to the result summary of a basic query, built using multiple operations within a single section.\n\n\nAs a part of the rest of the query, this will perform the following operations –\n\nSearch for posts that contain the phrase \"internet of things\" or \"#internetofthings\"\n\nFrom within those results, keep any result that also says \"robots\" or \"robot\" or \"#robot\" within three words (a proximity search) of either \"internet of things\" or \"iot\" or \"internetofthings\".\n\nDiscard any results that just have the phrase \"things\" within.\n\nParentheses nested within brackets intend to set off different operations as isolated processes. In the previous example, if you build an Advanced Query that states [(internet of things OR iot OR internet of things) AND (robots OR robot OR #robot)] your query will return results that contain ANY of the first three terms and the second three terms.\n\nHowever, if you build an Advanced Query that states [internet of things OR iot OR internet of things AND robots OR robot OR #robot], your query will return any result that contains the phrase \"internet of things\" or the word \"iot\" or the word \"robot\" or the hashtag #robot or specifically the phrase \"internet of things\" within the same message as the word \"robots\".\n\nNote:\n\nYou cannot use a \"NOT\" statement with an \"OR\" statement.\n\n\nExample:\n( social OR NOT media ) ❌\n( social NOT media ) ✅\n\n(( social OR ( media NOT facebook )) ✅\n\nWhy?\n\nQuery should not contain \"NOT\" terms in \"OR\" with other terms, \"NOT\" clauses should be used in \"AND\" with other terms, using \"NOT\" in \"OR\" will bring too much data.\n\nUse of Quotation marks\n\nQuotation marks can be used for phrases in which you are looking for an exact match of those particular words in a specific order. Using parentheses or quotation marks for single-word queries is not mandatory.\n\nUse straight quotation marks ( \" \" ) for outlining phrases within it. The use of curved quotation marks (“ ”) will not produce your desired results.\n\nParentheses are generally used to group keywords or phrases joined by one or more operators together, but with other keywords involved, parentheses and quotations would act differently. For example –\n\nVersion 1: \"Phil Schiller\" AND \"Apple Marketing\" will return results for content with the exact phrase Phil Schiller (or phil schiller) and the exact phrase Apple Marketing (or apple marketing).\n\nNote: Here exact does not mean case sensitive as in the case of exactMessage Operator.\n\nExample: exactMessage: (\"Phil Schiller\" AND \"Apple Marketing\"), which will fetch results for phrase Phil Schiller (not phil schiller) and the exact phrase Apple Marketing (not apple marketing).\n\n\nVersion 2: \"Phil Schiller\" AND (Apple OR Marketing) will return results for content with the phrase \"Phil Schiller\" (together) and at least one of the words, Apple or Marketing.\n\nHandling for Broad \u0026 Ambiguous Keywords\n\nIt is very important to not use/reduce use of broad keywords in advanced queries. Broader keywords will fetch mentions that are unrelated to topic of interest, and eventually hinder dashboard/insights\n\nFor all keywords used in an advanced topic query, ensure they are directly related to the topic of interest.\n\nIn case keywords are broad but relevant to topic, they should be tied to some relevant keywords related to that topic, by using NEAR Operators\n\nExample: Robot is an important keyword for Robot Company. However just using this keyword will fetch irrelevant keywords as it’s a broad keyword used for other entities as well (Robot Street, etc).\n\nInstead of using just Robot keyword, we should use: Robot NEAR/4 (Technology OR “machine” OR # tech OR IOT OR “Internet of things” ….)\n\nNote how keywords related to Robot are used with NEAR Operator. Related keywords could be related entities, industry keywords, parent company, country keywords, etc.\n\nFrequently asked questions\n\n​\n\nIs it compulsory to put quotation marks around phrases like \"apple music\" or can we use apple music directly?\n\nHow can I eliminate posts with many spam #’s or @’s?\n\nCan exact match or parent operators be used in advanced query?\n\nWhy am I able to see mentions in preview during making of topic but not in dashboard?\n\nDuring listening to @ mentions a lot of spam mentions are also getting tagged along, e.g. like wanting to get mentions of @tom but messages of @tom_fan56 are also coming. How to remove these irrelevant mentions?\n\nIf I write query as “tom” will it also fetch mentions such as tom_jerry / @tom / #tom ?\n\n​",
    "link": "https://www.sprinklr.com/help/articles/faqs-and-advanced-usecases/create-an-advanced-topic-query/646331628ea3c9635cf36711",
    "snippet": "Advanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more. By ...",
    "title": "‎Create an Advanced Topic Query | Sprinklr Help Center"
  },
  {
    "content_readable": "The query language for the Azure Resource Graph supports many operators and functions. Each work and operate based on Kusto Query Language (KQL). To learn about the query language used by Resource Graph, start with the tutorial for KQL.\n\nThis article covers the language components supported by Resource Graph:\n\nUnderstanding the Azure Resource Graph query language\n\nResource Graph tables\nExtended properties\nResource Graph custom language elements\n\nShared query syntax (preview)\nSupported KQL language elements\n\nSupported tabular/top level operators\nQuery scope\nEscape characters\nNext steps\n\nResource Graph tables\n\nResource Graph provides several tables for the data it stores about Azure Resource Manager resource types and their properties. Resource Graph tables can be used with the join operator to get properties from related resource types.\n\nResource Graph tables support the join flavors:\n\ninnerunique\ninner\nleftouter\nfullouter\n\nResource Graph table Can join other tables? Description\nAdvisorResources Yes Includes resources related to Microsoft.Advisor.\nAlertsManagementResources Yes Includes resources related to Microsoft.AlertsManagement.\nAppServiceResources Yes Includes resources related to Microsoft.Web.\nAuthorizationResources Yes Includes resources related to Microsoft.Authorization.\nAWSResources Yes Includes resources related to Microsoft.AwsConnector.\nAzureBusinessContinuityResources Yes Includes resources related to Microsoft.AzureBusinessContinuity.\nChaosResources Yes Includes resources related to Microsoft.Chaos.\nCommunityGalleryResources Yes Includes resources related to Microsoft.Compute.\nComputeResources Yes Includes resources related to Microsoft.Compute Virtual Machine Scale Sets.\nDesktopVirtualizationResources Yes Includes resources related to Microsoft.DesktopVirtualization.\nDnsResources Yes Includes resources related to Microsoft.Network.\nEdgeOrderResources Yes Includes resources related to Microsoft.EdgeOrder.\nElasticsanResources Yes Includes resources related to Microsoft.ElasticSan.\nExtendedLocationResources Yes Includes resources related to Microsoft.ExtendedLocation.\nFeatureResources Yes Includes resources related to Microsoft.Features.\nGuestConfigurationResources Yes Includes resources related to Microsoft.GuestConfiguration.\nHealthResourceChanges Yes Includes resources related to Microsoft.Resources.\nHealthResources Yes Includes resources related to Microsoft.ResourceHealth.\nInsightsResources Yes Includes resources related to Microsoft.Insights.\nIoTSecurityResources Yes Includes resources related to Microsoft.IoTSecurity and Microsoft.IoTFirmwareDefense.\nKubernetesConfigurationResources Yes Includes resources related to Microsoft.KubernetesConfiguration.\nKustoResources Yes Includes resources related to Microsoft.Kusto.\nMaintenanceResources Yes Includes resources related to Microsoft.Maintenance.\nManagedServicesResources Yes Includes resources related to Microsoft.ManagedServices.\nMigrateResources Yes Includes resources related to Microsoft.OffAzure.\nNetworkResources Yes Includes resources related to Microsoft.Network.\nPatchAssessmentResources Yes Includes resources related to Azure Virtual Machines patch assessment Microsoft.Compute and Microsoft.HybridCompute.\nPatchInstallationResources Yes Includes resources related to Azure Virtual Machines patch installation Microsoft.Compute and Microsoft.HybridCompute.\nPolicyResources Yes Includes resources related to Microsoft.PolicyInsights.\nRecoveryServicesResources Yes Includes resources related to Microsoft.DataProtection and Microsoft.RecoveryServices.\nResourceChanges Yes Includes resources related to Microsoft.Resources.\nResourceContainerChanges Yes Includes resources related to Microsoft.Resources.\nResourceContainers Yes Includes management group (Microsoft.Management/managementGroups), subscription (Microsoft.Resources/subscriptions) and resource group (Microsoft.Resources/subscriptions/resourcegroups) resource types and data.\nResources Yes The default table if a table isn't defined in the query. Most Resource Manager resource types and properties are here.\nSecurityResources Yes Includes resources related to Microsoft.Security.\nServiceFabricResources Yes Includes resources related to Microsoft.ServiceFabric.\nServiceHealthResources Yes Includes resources related to Microsoft.ResourceHealth/events.\nSpotResources Yes Includes resources related to Microsoft.Compute.\nSupportResources Yes Includes resources related to Microsoft.Support.\nTagsResources Yes Includes resources related to Microsoft.Resources/tagnamespaces.\n\nFor a list of tables that includes resource types, go to Azure Resource Graph table and resource type reference.\n\nNote\n\nResources is the default table. While querying the Resources table, it isn't required to provide the table name unless join or union are used. But the recommended practice is to always include the initial table in the query.\n\nTo discover which resource types are available in each table, use Resource Graph Explorer in the portal. As an alternative, use a query such as \u003ctableName\u003e | distinct type to get a list of resource types the given Resource Graph table supports that exist in your environment.\n\nThe following query shows a simple join. The query result blends the columns together and any duplicate column names from the joined table, ResourceContainers in this example, are appended with 1. As ResourceContainers table has types for both subscriptions and resource groups, either type might be used to join to the resource from Resources table.\n\nResources\n| join ResourceContainers on subscriptionId\n| limit 1\n\n\nThe following query shows a more complex use of join. First, the query uses project to get the fields from Resources for the Azure Key Vault vaults resource type. The next step uses join to merge the results with ResourceContainers where the type is a subscription on a property that is both in the first table's project and the joined table's project. The field rename avoids join adding it as name1 since the property already is projected from Resources. The query result is a single key vault displaying type, the name, location, and resource group of the key vault, along with the name of the subscription it's in.\n\nResources\n| where type == 'microsoft.keyvault/vaults'\n| project name, type, location, subscriptionId, resourceGroup\n| join (ResourceContainers | where type=='microsoft.resources/subscriptions' | project SubName=name, subscriptionId) on subscriptionId\n| project type, name, location, resourceGroup, SubName\n| limit 1\n\n\nNote\n\nWhen limiting the join results with project, the property used by join to relate the two tables, subscriptionId in the above example, must be included in project.\n\nExtended properties\n\nAs a preview feature, some of the resource types in Resource Graph have more type-related properties available to query beyond the properties provided by Azure Resource Manager. This set of values, known as extended properties, exists on a supported resource type in properties.extended. To show resource types with extended properties, use the following query:\n\nResources\n| where isnotnull(properties.extended)\n| distinct type\n| order by type asc\n\n\nExample: Get count of virtual machines by instanceView.powerState.code:\n\nResources\n| where type == 'microsoft.compute/virtualmachines'\n| summarize count() by tostring(properties.extended.instanceView.powerState.code)\n\n\nResource Graph custom language elements\n\nShared query syntax (preview)\n\nAs a preview feature, a shared query can be accessed directly in a Resource Graph query. This scenario makes it possible to create standard queries as shared queries and reuse them. To call a shared query inside a Resource Graph query, use the {{shared-query-uri}} syntax. The URI of the shared query is the Resource ID of the shared query on the Settings page for that query. In this example, our shared query URI is /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS. This URI points to the subscription, resource group, and full name of the shared query we want to reference in another query. This query is the same as the one created in Tutorial: Create and share a query.\n\nNote\n\nYou can't save a query that references a shared query as a shared query.\n\nExample 1: Use only the shared query:\n\nThe results of this Resource Graph query are the same as the query stored in the shared query.\n\n{{/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS}}\n\n\nExample 2: Include the shared query as part of a larger query:\n\nThis query first uses the shared query, and then uses limit to further restrict the results.\n\n{{/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS}}\n| where properties_storageProfile_osDisk_osType =~ 'Windows'\n\n\nSupported KQL language elements\n\nResource Graph supports a subset of KQL data types, scalar functions, scalar operators, and aggregation functions. Specific tabular operators are supported by Resource Graph, some of which have different behaviors.\n\nSupported tabular/top level operators\n\nHere's the list of KQL tabular operators supported by Resource Graph with specific samples:\n\nKQL Resource Graph sample query Notes\ncount Count key vaults\ndistinct Show resources that contain storage\nextend Count virtual machines by OS type\njoin Key vault with subscription name Join flavors supported: innerunique, inner, leftouter, and fullouter. Limit of three join or union operations (or a combination of the two) in a single query, counted together, one of which might be a cross-table join. If all cross-table join use is between Resource and ResourceContainers, then three cross-table join are allowed. Custom join strategies, such as broadcast join, aren't allowed. For which tables can use join, go to Resource Graph tables.\nlimit List all public IP addresses Synonym of take. Doesn't work with Skip.\nmvexpand Legacy operator, use mv-expand instead. RowLimit max of 2,000. The default is 128.\nmv-expand List Azure Cosmos DB with specific write locations RowLimit max of 2,000. The default is 128. Limit of 3 mv-expand in a single query.\norder List resources sorted by name Synonym of sort\nparse Get virtual networks and subnets of network interfaces It's optimal to access properties directly if they exist instead of using parse.\nproject List resources sorted by name\nproject-away Remove columns from results\nsort List resources sorted by name Synonym of order\nsummarize Count Azure resources Simplified first page only\ntake List all public IP addresses Synonym of limit. Doesn't work with Skip.\ntop Show first five virtual machines by name and their OS type\nunion Combine results from two queries into a single result Single table allowed: | union [kind= inner|outer] [withsource=ColumnName] Table. Limit of three union legs in a single query. Fuzzy resolution of union leg tables isn't allowed. Might be used within a single table or between the Resources and ResourceContainers tables.\nwhere Show resources that contain storage\n\nThere's a default limit of three join and three mv-expand operators in a single Resource Graph SDK query. You can request an increase in these limits for your tenant through Help + support.\n\nTo support the Open Query portal experience, Azure Resource Graph Explorer has a higher global limit than Resource Graph SDK.\n\nNote\n\nYou can't reference a table as right table multiple times, which exceeds the limit of 1. If you do so, you would receive an error with code DisallowedMaxNumberOfRemoteTables.\n\nQuery scope\n\nThe scope of the subscriptions or management groups from which resources are returned by a query defaults to a list of subscriptions based on the context of the authorized user. If a management group or a subscription list isn't defined, the query scope is all resources, and includes Azure Lighthouse delegated resources.\n\nThe list of subscriptions or management groups to query can be manually defined to change the scope of the results. For example, the REST API managementGroups property takes the management group ID, which is different from the name of the management group. When managementGroups is specified, resources from the first 10,000 subscriptions in or under the specified management group hierarchy are included. managementGroups can't be used at the same time as subscriptions.\n\nExample: Query all resources within the hierarchy of the management group named My Management Group with ID myMG.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01\n\n\nRequest Body\n\n{\n  \"query\": \"Resources | summarize count()\",\n  \"managementGroups\": [\"myMG\"]\n}\n\n\nThe AuthorizationScopeFilter parameter enables you to list Azure Policy assignments and Azure role-based access control (Azure RBAC) role assignments in the AuthorizationResources table that are inherited from upper scopes. The AuthorizationScopeFilter parameter accepts the following values for the PolicyResources and AuthorizationResources tables:\n\nAtScopeAndBelow (default if not specified): Returns assignments for the given scope and all child scopes.\nAtScopeAndAbove: Returns assignments for the given scope and all parent scopes, but not child scopes.\nAtScopeAboveAndBelow: Returns assignments for the given scope, all parent scopes, and all child scopes.\nAtScopeExact: Returns assignments only for the given scope; no parent or child scopes are included.\n\nNote\n\nTo use the AuthorizationScopeFilter parameter, be sure to use the 2021-06-01-preview or later API version in your requests.\n\nExample: Get all policy assignments at the myMG management group and Tenant Root (parent) scopes.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-06-01-preview\n\n\nRequest Body Sample\n\n{\n  \"options\": {\n    \"authorizationScopeFilter\": \"AtScopeAndAbove\"\n  },\n  \"query\": \"PolicyResources | where type =~ 'Microsoft.Authorization/PolicyAssignments'\",\n  \"managementGroups\": [\"myMG\"]\n}\n\n\nExample: Get all policy assignments at the mySubscriptionId subscription, management group, and Tenant Root scopes.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-06-01-preview\n\n\nRequest Body Sample\n\n{\n  \"options\": {\n    \"authorizationScopeFilter\": \"AtScopeAndAbove\"\n  },\n  \"query\": \"PolicyResources | where type =~ 'Microsoft.Authorization/PolicyAssignments'\",\n  \"subscriptions\": [\"mySubscriptionId\"]\n}\n\n\nEscape characters\n\nSome property names, such as those that include a . or $, must be wrapped or escaped in the query or the property name is interpreted incorrectly and doesn't provide the expected results.\n\nDot (.): Wrap the property name ['propertyname.withaperiod'] using brackets.\n\nExample query that wraps the property odata.type:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.['odata.type']\n\n\nDollar sign ($): Escape the character in the property name. The escape character used depends on the shell that runs Resource Graph.\n\nBash: Use a backslash (\\) as the escape character.\n\nExample query that escapes the property $type in Bash:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.\\$type\n\n\ncmd: Don't escape the dollar sign ($) character.\n\nPowerShell: Use a backtick (`) as the escape character.\n\nExample query that escapes the property $type in PowerShell:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.`$type\n\n\nNext steps\n\nAzure Resource Graph query language Starter queries and Advanced queries.\nLearn more about how to explore Azure resources.",
    "link": "https://learn.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language",
    "snippet": "The query language for the Azure Resource Graph supports many operators and functions. Each work and operate based on Kusto Query Language (KQL).",
    "title": "Understanding the Azure Resource Graph query language - Microsoft"
  }
]
s4 llm_format success 2026-03-01 22:28:35 → 2026-03-01 22:28:56
Input (117397 bytes)
[
  {
    "content_readable": "What is Huginn?\n\nHuginn is a system for building agents that perform automated tasks for you online. They can read the web, watch for events, and take actions on your behalf. Huginn's Agents create and consume events, propagating them along a directed graph. Think of it as a hackable version of IFTTT or Zapier on your own server. You always know who has your data. You do.\n\nHere are some of the things that you can do with Huginn:\n\nTrack the weather and get an email when it's going to rain (or snow) tomorrow (\"Don't forget your umbrella!\")\nList terms that you care about and receive email when their occurrence on Twitter changes. (For example, want to know when something interesting has happened in the world of Machine Learning? Huginn will watch the term \"machine learning\" on Twitter and tell you when there is a spike in discussion.)\nWatch for air travel or shopping deals\nFollow your project names on Twitter and get updates when people mention them\nScrape websites and receive email when they change\nConnect to Adioso, HipChat, FTP, IMAP, Jabber, JIRA, MQTT, nextbus, Pushbullet, Pushover, RSS, Bash, Slack, StubHub, translation APIs, Twilio, Twitter, and Weibo, to name a few.\nSend digest email with things that you care about at specific times during the day\nTrack counts of high frequency events and send an SMS within moments when they spike, such as the term \"san francisco emergency\"\nSend and receive WebHooks\nRun custom JavaScript or CoffeeScript functions\nTrack your location over time\nCreate Amazon Mechanical Turk workflows as the inputs, or outputs, of agents (the Amazon Turk Agent is called the \"HumanTaskAgent\"). For example: \"Once a day, ask 5 people for a funny cat photo; send the results to 5 more people to be rated; send the top-rated photo to 5 people for a funny caption; send to 5 final people to rate for funniest caption; finally, post the best captioned photo on my blog.\"\n\nJoin us in our Gitter room to discuss the project.\n\nJoin us!\n\nWant to help with Huginn? All contributions are encouraged! You could make UI improvements, add new Agents, write documentation and tutorials, or try tackling issues tagged with #\"help wanted\". Please fork, add specs, and send pull requests!\n\nHave an awesome idea but not feeling quite up to contributing yet? Head over to our Official 'suggest an agent' thread and tell us!\n\nExamples\n\nPlease checkout the Huginn Introductory Screencast!\n\nAnd now, some example screenshots. Below them are instructions to get you started.\n\nGetting Started\n\nDocker\n\nThe quickest and easiest way to check out Huginn is to use the official Docker image. Have a look at the documentation.\n\nLocal Installation\n\nIf you just want to play around, you can simply fork this repository, then perform the following steps:\n\nRun git remote add upstream https://github.com/huginn/huginn.git to add the main repository as a remote for your fork.\nCopy .env.example to .env (cp .env.example .env) and edit .env, at least updating the APP_SECRET_TOKEN variable.\nMake sure that you have MySQL or PostgreSQL installed. (On a Mac, the easiest way is with Homebrew. If you're going to use PostgreSQL, you'll need to prepend all commands below with DATABASE_ADAPTER=postgresql.)\nRun bundle to install dependencies\nRun bundle exec rake db:create, bundle exec rake db:migrate, and then bundle exec rake db:seed to create a development database with some example Agents.\nRun bundle exec foreman start, visit http://localhost:3000/, and login with the username of admin and the password of password.\nSetup some Agents!\nRead the wiki for usage examples and to get started making new Agents.\nPeriodically run git fetch upstream and then git checkout master \u0026\u0026 git merge upstream/master to merge in the newest version of Huginn.\n\nNote: By default, email messages are intercepted in the development Rails environment, which is what you just setup. You can view them at http://localhost:3000/letter_opener. If you'd like to send real email via SMTP when playing with Huginn locally, set SEND_EMAIL_IN_DEVELOPMENT to true in your .env file.\n\nIf you need more detailed instructions, see the Novice setup guide.\n\nDevelop\n\nAll agents have specs! And there's also acceptance tests that simulate running Huginn in a headless browser.\n\nInstall PhantomJS 2.1.1 or greater:\n\nUsing Node Package Manager: npm install phantomjs\nUsing Homebrew on OSX brew install phantomjs\nRun all specs with bundle exec rspec\nRun a specific spec with bundle exec rspec path/to/specific/test_spec.rb.\nRead more about rspec for rails here.\n\nUsing Huginn Agent gems\n\nHuginn Agents can now be written as external gems and be added to your Huginn installation with the ADDITIONAL_GEMS environment variable. See the Additional Agent gems section of .env.example for more information.\n\nIf you'd like to write your own Huginn Agent Gem, please see huginn_agent.\n\nOur general intention is to encourage complex and specific Agents to be written as Gems, while continuing to add new general-purpose Agents to the core Huginn repository.\n\nDeployment\n\nPlease see the Huginn Wiki for detailed deployment strategies for different providers.\n\nHeroku\n\nTry Huginn on Heroku: (Takes a few minutes to setup. Read the documentation while you are waiting and be sure to click 'View it' after launch!) Huginn launches only on a paid subscription plan for Heroku. For non-experimental use, we strongly recommend Heroku's 1GB paid plan or our Docker container.\n\nOpenShift\n\nOpenShift Online\n\nTry Huginn on OpenShift Online\n\nCreate a new app with either mysql or postgres:\n\noc new-app -f https://raw.githubusercontent.com/huginn/huginn/master/openshift/templates/huginn-mysql.json\n\nor\n\noc new-app -f https://raw.githubusercontent.com/huginn/huginn/master/openshift/templates/huginn-postgresql.json\n\nNote: You can also use the web console to import either json file by going to \"Add to Project\" -\u003e \"Import YAML/JSON\".\n\nIf you are on the Starter plan, make sure to follow the guide to remove any existing application.\n\nThe templates should work on a v3 installation or the current v4 online.\n\nManual installation on any server\n\nHave a look at the installation guide.\n\nOptional Setup\n\nSetup for private development\n\nSee private development instructions on the wiki.\n\nEnable the WeatherAgent\n\nIn order to use the WeatherAgent you need an Weather Data API key from Pirate Weather. Sign up for one and then change the value of api_key: your-key in your seeded WeatherAgent.\n\nDisable SSL\n\nWe assume your deployment will run over SSL. This is a very good idea! However, if you wish to turn this off, you'll probably need to edit config/initializers/devise.rb and modify the line containing config.rememberable_options = { :secure =\u003e true }. You will also need to edit config/environments/production.rb and modify the value of config.force_ssl.\n\nLicense\n\nHuginn is provided under the MIT License.\n\nHuginn was originally created by @cantino in 2013. Since then, many people's dedicated contributions have made it what it is today.\n\n",
    "link": "https://github.com/huginn/huginn",
    "snippet": "Huginn is a system for building agents that perform automated tasks for you online. They can read the web, watch for events, and take actions on your behalf.",
    "title": "huginn/huginn: Create agents that monitor and act on your ..."
  },
  {
    "content_readable": "Huginn is an open source web automation tool. It enables users to create agents, which function like programs, for tasks such as website monitoring, data retrieval, and online service interaction. These agents can be configured to respond to web changes or trigger specific actions, providing a solution for automating tasks and staying informed about online activities\n\nLogin\n\nOn your first visit to the site, you will be presented with the login/signup screen.\n\nWhen your instance is first created, an account is created for you with the email you chose. You can get the password for this account by going to your Elestio dashboard and clicking on the \"Show Password\" button.\n\nEnter your email, name and password and click the \"Login\" button\n\nCreating New Agent\n\nAgent is a fundamental building block that performs a specific task or action. It can be thought of as a software component that carries out automated actions based on predefined rules and triggers. Agents can perform a wide range of tasks, such as fetching data from APIs, monitoring websites, sending notifications, and more. Each agent is configured with its own set of options and parameters to define its behavior. You can create agent by clicking on the \"New Agent\" button.\n\nCreating New Scenario\n\nScenario is a sequence of events and agents that work together to perform a specific task or automate a workflow. It represents a set of actions and conditions that are executed in a predefined order. Scenarios in Huginn allow you to define complex workflows by connecting agents and specifying the flow of data between them. Each scenario can have multiple agents and events, and they can be triggered by various conditions or time intervals. You can create new scenario by clicking on the \"New Scenario\" button.\n\nCreating New Credential\n\nCredentials are used to securely store and manage sensitive information, such as API keys, passwords, and access tokens. Credentials can be created and associated with agents to provide them with the necessary authentication details to interact with external services or APIs. This allows agents to securely access and retrieve data from various sources without exposing sensitive information in the agent configuration. You can create new credentials by clicking on the \"New Credential\" button.\n\nEvents\n\nEvents are a key concept used to trigger actions and automate workflows. They represent specific occurrences or conditions that can initiate the execution of agents within a scenario. Events can be based on various triggers, such as receiving an HTTP request, a specific time interval, changes in data, or external API calls. When an event is triggered, it can pass data to the connected agents, allowing them to perform actions based on the event's context. You can check events in the \"Events\" section.\n\nBackground Jobs\n\nJobs are background tasks that are executed asynchronously. They can be used to perform long-running or resource-intensive operations without blocking the main execution flow. Jobs can be scheduled to run at specific intervals or triggered by events. They are commonly used for tasks such as data processing, API calls, and sending notifications. The status and details of jobs can be monitored in the \"Jobs\" section of the Huginn user interface.\n\nCreating New User\n\nUser are individuals who has registered an account and has access to the Huginn user interface. Users can log in to Huginn, create and manage agents, scenarios, credentials, events, and jobs. They can configure and customize their Huginn instance according to their specific needs. Users can also monitor the status and details of their agents, scenarios, and jobs through the user interface. You can create new user by clicking on the \"New User\" button.\n\n",
    "link": "https://elest.io/open-source/huginn/resources/quickstart",
    "snippet": "Agents can perform a wide range of tasks, such as fetching data from APIs, monitoring websites, sending notifications, and more. Each agent is configured with ...",
    "title": "Huginn - Quickstart | Elest.io"
  },
  {
    "content_readable": "I’ve been playing around a lot with Huginn, which is a service that allows you to run “agents” for automation. It is similar to IFTTT.\n\nA lot of people see Huginn, and think it’s cool, but don’t know what to do with it. I didn’t really either when I first heard of the project a few years ago. Hopefully you can get some ideas from this blog.\n\nI have previously written about my “On This Day” software, which was a webserver I handcrafted. It ran a bunch of scripts to pull in data from various sources, and would create a daily digest of the items. Since I put this together fairly quickly several years ago, it runs usually without issue. But it’s annoying to extend, and annoying to debug. I wanted to try to migrate the functionaility to huginn.\n\nHuginn has a bit of a learning curve. At first, I was having a hard time understanding the agent configuration options, and I couldn’t even figure out how to use the HTTP request agent (called the “Website Agent”). With some light reading of the source code, I figured everything out. Currently, my “On This Day” scenario features 14 agents.\n\n7 “source” agents. Most of these are the “Website agents” which scrape some data from the web, and parse the information I want from it. Several of these are for comics, and they simply output the image source for a comic image. One of the agents is a JavaScript agent, which runs a script I wrote that generates a different result based on the date.\n\n5 formatting agents, which take in random JSON input, and normalize it. Each source has a different input, but the outputs all just have a “message” and a “type”.\n\n1 digest agent, which takes in the normalized JSON, and combines it into one HTML template. This listens for events over the course of a day, and then when scheduled, it outputs the result of the templating.\n\n1 “data output” agent, which basically just means an RSS feed output. For every new input, this adds an item to the feed.\n\nSo far, this solution works really well. It’s really easy to add new things. A few of the odd data sources from “On This Day” use JavaScript to fill in functionality gaps. For my journal data entry, which gets entries from historic journals, I ended up making my own API service. I finished most of this API in an evening, and adding it into huginn was really simple.\n\nFor fun, I also added a “reblog” feed. I connected an Webhook agent to my RSS reader, Miniflux. I can “save” stories in Miniflux, and their links will get added to a daily list of “reblogged” items. I used a manual agent as well so that I can add links that don’t originate from Miniflux (i.e. if I just find something elsewhere on the web). You can follow this feed here.\n\nI also have been using huginn for my personal tracking stuff. I previously had a bunch of cronjobs running on a raspberry pi. It collected data from my Airgradient arduino, and used a weather API. Some of this I’ve moved fully to Huginn, which will massively simplify all of the configuration.\n\nOne benefit of having all of this is huginn is that it is much easier to set it all up again on a new server. As much as possible, everything is together in one place, rather than spread around in cronjobs on various nodes. I’ve been putting everything in Docker and exposing it via Traefik. It’s very simple to set this all up.\n\nOne downside of Huginn is that it’s been hogging a lot of memory on my VPS. This isn’t a huge deal, as I was already using a very minimal server. Additionally, as I mentioned before, it does take some time to figure out everything. I’m still not 100% sure of all of the settings, but functionally I was able to get a lot out with minimal tinkering. I also was dissapointed in the Docker setup, which doesn’t have as much documentation for as I expected.\n\nOverall, I really enjoy Huginn! It’s taken a lot of the scripts I’ve written over the last few years, and simplies their deployment and configuration. It’s so much easier to update them, and I can do things I never before attempted.",
    "link": "https://marks.kitchen/blog/huginn/",
    "snippet": "I've been playing around a lot with Huginn, which is a service that allows you to run “agents” for automation. It is similar to IFTTT.",
    "title": "An Introduction to Huginn - Mark's Kitchen"
  },
  {
    "content_readable": "As developers, we don’t have the time or patience for routine tasks. We like to get things done, and any tools that can help us automate are high on our radar.\n\nEnter Huginn, a workflow automation server similar to Zapier or IFTTT, but open source. With Huginn you can automate tasks such as watching for air travel deals, continually watching for certain topics on Twitter, or scanning for sensitive data in your code.\n\nRecently a post about Huginn hit the top of Hacker News. This piqued my interest, so I wanted to see why it's so popular, what it's all about, and what it's being used for.\n\nHow Huginn Started\n\nI reached out to Huginn's creator, Andrew Cantino, to ask him why he started it.\n\n\"I started the project in 2013 to scratch my own itch—I wanted to scrape some websites to know when they changed (web comics, movie trailers, local weather forecasts, Craigslist sales, eBay, etc.) and I wanted to be able to automate simple reactions to those changes. I'd been interested in personal automation for a while and Huginn was initially a quick project I built over the Christmas holidays that year.\"\n\nHowever, that simple Christmas-holiday project quickly grew.\n\nToday, Huginn is a community-driven project with hundreds of contributors and thousands of users. Andrew still uses Huginn for its original use case:\n\n\"I still primarily use Huginn for this purpose: it tells me about upcoming yard sales, if I should bring an umbrella today because of rain in the forecast, when rarely-updated blogs have changed, when certain words spike on Twitter, etc. I also have found it very useful for sourcing information for the weekly newsletter that I write about the space industry, called The Orbital Index.\"\n\nHowever, the community has found a wider range of uses. So let's look at exactly what Huginn is, how to set it up, and how to use it to automate your everyday life.\n\nHow Huginn Works\n\nHuginn is a web-based scheduling service that runs workers called Agents. Each Agent performs a specific function, such as sending an email or requesting a website. Agents generate and consume JSON payloads called events, which can be used to chain Agents together. Agents can be scheduled, or executed manually.\n\nGetting Started\n\nIt's easy to deploy Huginn with just one click using the Deploy to Heroku button. Huginn also supports Docker and Docker Compose, manual installation on Linux, and many other deployment methods. After installing, you can extend Huginn by using one of the many available Agent Gems, or by creating your own.\n\nOnce you've deployed Huginn and have logged in (check your specific setup for the URL), creating a new Agent is simple, as seen in this screen shot. This Agent follows a Twitter stream in real time.\n\nHere's an existing Agent that pulls the latest comic from xkcd.com. You can see the basic stats of the Agent (last checked, last created, and so on). The Options field shows how the Agent is configured, including the CSS selectors used to extract data from the page.\n\nScenarios\n\nYou can also organize Agents into Scenarios, which allows you to group similar Agents as well as import and export Agent configurations as JSON files. You can also fine-tune Agent scheduling and configuration using special Agents called Controllers. Here we see a Scenario build around the theme of \"Entertainment.\"\n\nDynamic Content\n\nLastly, Huginn uses the Liquid templating engine, which allows you to load dynamic content into Agents. This is commonly used to store configuration data (such as credentials) separately from Agents.\n\nHere, it's used to format the URL, title, and on-hover text from the XKCD Source Agent as HTML:\n\nWhy Would I Use Huginn?\n\nIn addition to web scraping, Huginn supports a wide variety of actions that can allow for some truly complex workflows. Disclaimer: Many sites disallow automated web scraping. Be sure to check the terms of service (TOS) of any website you intend to access using Huginn.\n\nSome of the examples from the GitHub page include:\n\nWatch for air travel or shopping deals\nFollow your project names on Twitter and get updates when people mention them Connect to Adioso, HipChat, Basecamp, Growl, FTP, IMAP, Jabber, JIRA, MQTT, nextbus, Pushbullet, Pushover, RSS, Bash, Slack, StubHub, translation APIs, Twilio, Twitter, Wunderground, and Weibo, to name a few.\nSend digest emails with things that you care about at specific times during the day\nTrack counts of high frequency events and send an SMS within moments when they spike\nSend and receive WebHooks\nRun custom JavaScript or CoffeeScript functions\nTrack your location over time\nCreate Amazon Mechanical Turk workflows as the inputs, or outputs, of agents (the Amazon Turk Agent is called the \"HumanTaskAgent\"). For example: \"Once a day, ask 5 people for a funny cat photo; send the results to 5 more people to be rated; send the top-rated photo to 5 people for a funny caption; send to 5 final people to rate for funniest caption; finally, post the best captioned photo on my blog.\"\n\nLet's look at a few of these use cases in detail.\n\nCurated Feeds\n\nUsing the Website Agent, you can fetch the latest contents of multiple web pages, filter and aggregate the results, then send the final contents to yourself as an email. The default Scenario demonstrates this by fetching the latest XKCD comic. This creates an event containing the comic title, URL, and on-hover text, which are rendered as HTML via an Event Formatting Agent. Another Website Agent simultaneously gets the latest movie trailers from iTunes, then both events are merged into an Email Digest Agent that fires each afternoon:\n\nMonitoring Social Networks\n\nHuginn supports several social networks including Twitter and Tumblr. These Agents can watch for new posts, trending topics, and updates from other users.\n\nLet’s say you live in a hurricane-prone area and want to follow the impact of a storm. Using a Twitter Stream Agent, you can watch for Tweets containing “hurricane,” “storm,” and so on, and pass the results to a Peak Detector Agent. This counts Tweets over a period of time, measures the standard deviation, and fires an event if it detects an outlier. You can have this event trigger an Email Agent that notifies you immediately. Andrew Cantino explains this use case in more detail on his blog.\n\nPrice Shopping\n\nHuginn makes an excellent online shopping tool. When shopping for the best deal, create Website Agents to run daily searches on discount and trading sites. Use Event Formatting Agents to extract prices, then use a Change Detector Agent to compare the last retrieved price to the current price. If it’s lower, you can extract the item URL and send it straight to your inbox.\n\nSecurity Alerts\n\nStaying on top of security updates is a continuous process. You can use Huginn to watch the National Vulnerability Database for CVEs affecting your systems and notify you immediately. If you want to filter the results (e.g. only show high-priority alerts), you can use a Trigger Agent to only allow results where the severity is above a certain value.\n\nAdvanced Use Cases\n\nHuginn comes with some powerful Agents that greatly extend its capabilities beyond web scraping.\n\nData Processing and Validation\n\nHuginn can read files stored on the host, making it a useful data processing tool. Let's say you're testing changes to a codebase, and before you commit, you want to scan for any sensitive data that you might have left in during testing. You can create a Local File Agent to scan your project directory, pass the contents to an Event Formatting Agent, and use regular expressions to detect credentials, passwords, and similar strings. Alternatively, you could use a Shell Command Agent to call a utility like repo-supervisor and fire a desktop notification when it detects matches.\n\nNewsroom Automation\n\nOne of Huginn’s first great successes was its adoption by the New York Times to automate newsroom tasks. During the 2014 Winter Olympics, Huginn monitored their data pipeline availability and sent notifications when medals were awarded. Huginn also notified reporters when new stories published and updated a Slack channel when content changed on nytimes.com. You can learn more about their use cases at Huginn for Newsrooms.\n\nConclusion\n\nHuginn is a deceptively simple tool with a lot of flexibility. The best way to see what it can do is to try it yourself. To learn more, visit https://github.com/huginn/huginn.",
    "link": "https://dev.to/heroku/huginn-an-open-source-self-hosted-ifttt-5hd6",
    "snippet": "Each Agent performs a specific function, such as sending an email or requesting a website. Agents generate and consume JSON payloads called ...",
    "title": "Huginn: An Open-Source, Self-Hosted IFTTT"
  },
  {
    "content_readable": "",
    "link": "https://medium.com/@VirtualAdept/huginn-writing-a-simple-agent-network-97c63c492334",
    "snippet": "This agent network will run every half hour, poll a REST API endpoint, and e-mail you what it gets. You'll have to have an already running Huginn instance.",
    "title": "Huginn: Writing a simple agent network"
  },
  {
    "content_readable": "Skip to content\n\nNavigation Menu\n\n{{ message }}\n\nOverview\nRepositories\nProjects\nPackages\nPeople\n\nPopular repositories Loading\n\nCreate agents that monitor and act on your behalf. Your agents are standing by!\n\nRuby 48.8k 4.2k\n\nBase for creating new Huginn Agents as Gems\n\nRuby 128 50\n\nTests for the Huginn docker images\n\nRuby 5 10\n\nRepositories\nType\nSelect type\n\nAll Public Sources Forks Archived Mirrors Templates\nLanguage\nSelect language\n\nAll Ruby\nSort\nSelect order\n\nLast updated Name Stars\n\nShowing 6 of 6 repositories\n\nhuginn Public\n\nCreate agents that monitor and act on your behalf. Your agents are standing by!\n\nhuginn/huginn’s past year of commit activity\n\nhuginn/omniauth-dropbox-oauth2’s past year of commit activity\n\nRuby 4 45 0 1\n\nUpdated Nov 17, 2024\n\nhuginn_agent Public\n\nBase for creating new Huginn Agents as Gems\n\nhuginn/huginn_agent’s past year of commit activity\n\nRuby 128\n\nMIT 50 3 2\n\nUpdated Oct 28, 2024\n\nhuginn/huginn_docker_specs’s past year of commit activity\n\nRuby 5 10 0 1\n\nUpdated Apr 12, 2023\n\nhuginn/delayed_job_active_record’s past year of commit activity\n\nRuby 1\n\nMIT 344 0 0\n\nUpdated Jan 15, 2023\n\nhuginn/tumblr_client’s past year of commit activity\n\nRuby 2\n\nApache-2.0 137 0 0\n\nUpdated Jul 21, 2020\n",
    "link": "https://github.com/huginn",
    "snippet": "Huginn. Create agents that monitor and act on your behalf. Your agents are standing by!",
    "title": "Huginn - Create agents that monitor and act on your behalf"
  },
  {
    "content_readable": "Huginn\n\nCreate agents that monitor and act on your behalf. Your agents are standing by!\n\nVisit Website Reviews\n\nWhat is Huginn?\n\nHuginn is a system for building agents that perform automated tasks for you online. They can read the web, watch for events, and take actions on your behalf. Huginn's Agents create and consume events, propagating them along a directed graph. Think of it as a hackable version of IFTTT or Zapier on your own server. You always know who has your data. You do.\n\nHere are some of the things that you can do with Huginn:\n\nTrack the weather and get an email when it's going to rain (or snow) tomorrow (\"Don't forget your umbrella!\")\nList terms that you care about and receive email when their occurrence on Twitter changes. (For example, want to know when something interesting has happened in the world of Machine Learning? Huginn will watch the term \"machine learning\" on Twitter and tell you when there is a spike in discussion.)\nWatch for air travel or shopping deals\nFollow your project names on Twitter and get updates when people mention them\nScrape websites and receive email when they change\nConnect to Adioso, HipChat, FTP, IMAP, Jabber, JIRA, MQTT, nextbus, Pushbullet, Pushover, RSS, Bash, Slack, StubHub, translation APIs, Twilio, Twitter, and Weibo, to name a few.\nSend digest email with things that you care about at specific times during the day\nTrack counts of high frequency events and send an SMS within moments when they spike, such as the term \"san francisco emergency\"\nSend and receive WebHooks\nRun custom JavaScript or CoffeeScript functions\nTrack your location over time\nCreate Amazon Mechanical Turk workflows as the inputs, or outputs, of agents (the Amazon Turk Agent is called the \"HumanTaskAgent\"). For example: \"Once a day, ask 5 people for a funny cat photo; send the results to 5 more people to be rated; send the top-rated photo to 5 people for a funny caption; send to 5 final people to rate for funniest caption; finally, post the best captioned photo on my blog.\"\n\nHuginn Reviews\n\nHuginn doesn't have enough reviews yet!\n\nHuginn details\n\nFree\n\nCategories",
    "link": "https://productivity.directory/huginn",
    "snippet": "Huginn is a system for building agents that perform automated tasks for you online. They can read the web, watch for events, and take actions on your behalf.",
    "title": "Huginn Review 2025 - Features, Pricing, Hacks and Tips"
  },
  {
    "content_readable": "whoa there, pardner!\n\nReddit's awesome and all, but you may have a bit of a problem. We've seen far too many requests come from your IP address recently.\n\nPlease wait a few minutes and try again.\n\nIf you're still getting this error after a few minutes and think that we've incorrectly blocked you or you would like to discuss easier ways to get the data you want, please contact us at this email address.\n\nYou can read Reddit's Terms of Service here.\n\nWhen contacting us, please include your Reddit account along with the following code:\n\n019cab4d-76d0-7f1a-81cd-1b1d8c350f87",
    "link": "https://www.reddit.com/r/selfhosted/comments/fmky18/huginn_agent_mageathread/",
    "snippet": "It allows you to create \"agents\" which are like little bots that do tasks for you. Each agent is sort of like a \"function\" in programming.",
    "title": "Huginn Agent Mageathread! : r/selfhosted - Reddit"
  },
  {
    "content_readable": "This is part one of the Advanced Use Cases series:\n\n1️⃣ Extract Metadata from Queries to Improve Retrieval\n\n2️⃣ Query Expansion\n\n3️⃣ Query Decomposition\n\n4️⃣ Automated Metadata Enrichment\n\nSometimes a single question is multiple questions in disguise. For example: “Did Microsoft or Google make more money last year?”. To get to the correct answer for this seemingly simple question, we actually have to break it down: “How much money did Google make last year?” and “How much money did Microsoft make last year?”. Only if we know the answer to these 2 questions can we reason about the final answer.\n\nThis is where query decomposition comes in. This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach:\n\nDecompose the original question into smaller questions that can be answered independently to each other. Let’s call these ‘sub questions’ here on out.\nReason about the final answer to the original question, based on each sub-answer.\n\nWhile for many query/dataset combinations, this may not be required, for some, it very well may be. At the end of the day, often one query results in one retrieval step. If within that one single retrieval step we are unable to have the retriever return both the money Microsoft made last year and Google, then the system will struggle to produce an accurate final response.\n\nThis method ensures that we are:\n\nretrieving the relevant context for each sub question.\nreasoning about the final answer given each answer based on the contexts retrieved for each sub question.\n\nIn this article, I’ll be going through some key steps that allow you to achieve this. You can find the full working example and code in the linked recipe from our cookbook. Here, I’ll only show the most relevant parts of the code.\n\n🚀 I’m sneaking something extra into this article. I saw the opportunity to try out the structured output functionality (currently in beta) by OpenAI to create this example. For this step, I extended the OpenAIGenerator in Haystack to be able to work with Pydantic schemas. More on this in the next step.\n\nLet’s try build a full pipeline that makes use of query decomposition and reasoning. We’ll use a dataset about Game of Thrones (a classic for Haystack) which you can find preprocessed and chunked on Tuana/game-of-thrones on Hugging Face Datasets.\n\nDefining our Questions Structure\n\nOur first step is to create a structure within which we can contain the subquestions, and each of their answers. This will be used by our OpenAIGenerator to produce a structured output.\n\nfrom pydantic import BaseModel\n\nclass Question(BaseModel):\n    question: str\n    answer: Optional[str] = None\n\nclass Questions(BaseModel):\n    questions: list[Question]\n\n\nThe structure is simple, we have Questions made up of a list of Question. Each Question has the question string as well as an optional answer to that question.\n\nDefining the Prompt for Query Decomposition\n\nNext up, we need to get an LLM to decompose a question and produce multiple questions. Here, we will start making use of our Questions schema.\n\nsplitter_prompt = \"\"\"\nYou are a helpful assistant that prepares queries that will be sent to a search component.\nSometimes, these queries are very complex.\nYour job is to simplify complex queries into multiple queries that can be answered\nin isolation to eachother.\n\nIf the query is simple, then keep it as it is.\nExamples\n1. Query: Did Microsoft or Google make more money last year?\n   Decomposed Questions: [Question(question='How much profit did Microsoft make last year?', answer=None), Question(question='How much profit did Google make last year?', answer=None)]\n2. Query: What is the capital of France?\n   Decomposed Questions: [Question(question='What is the capital of France?', answer=None)]\n3. Query: {{question}}\n   Decomposed Questions:\n\"\"\"\n\nbuilder = PromptBuilder(splitter_prompt)\nllm = OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions})\n\n\nAnswering Each Sub Question\n\nFirst, let’s build a pipeline that uses the splitter_prompt to decompose our question:\n\nquery_decomposition_pipeline = Pipeline()\n\nquery_decomposition_pipeline.add_component(\"prompt\", PromptBuilder(splitter_prompt))\nquery_decomposition_pipeline.add_component(\"llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\n\nquery_decomposition_pipeline.connect(\"prompt\", \"llm\")\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question}})\nprint(result[\"llm\"][\"structured_reply\"])\n\n\nThis produces the following Questions (List[Question])\n\nquestions=[Question(question='How many siblings does Jamie have?', answer=None), \n           Question(question='How many siblings does Sansa have?', answer=None)]\n\n\nNow, we have to fill in the answer fields. For this step, we need to have a separate prompt and two custom components:\n\nThe CohereMultiTextEmbedder which can take multiple questions rather than a single one like the CohereTextEmbedder.\nThe MultiQueryInMemoryEmbeddingRetriever which can again, take multiple questions and their embeddings, returning question_context_pairs. Each pair contains the question and documents that are relevant to that question.\n\nNext, we need to construct a prompt that can instruct a model to answer each subquestion:\n\nmulti_query_template = \"\"\"\nYou are a helpful assistant that can answer complex queries.\nHere is the original question you were asked: {{question}}\n\nAnd you have split the task into the following questions:\n{% for pair in question_context_pairs %}\n  {{pair.question}}\n{% endfor %}\n\nHere are the question and context pairs for each question.\nFor each question, generate the question answer pair as a structured output\n{% for pair in question_context_pairs %}\n  Question: {{pair.question}}\n  Context: {{pair.documents}}\n{% endfor %}\nAnswers:\n\"\"\"\n\nmulti_query_prompt = PromptBuilder(multi_query_template)\n\n\nLet’s build a pipeline that can answer each individual sub question. We will call this the query_decomposition_pipeline :\n\nquery_decomposition_pipeline = Pipeline()\n\nquery_decomposition_pipeline.add_component(\"prompt\", PromptBuilder(splitter_prompt))\nquery_decomposition_pipeline.add_component(\"llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\nquery_decomposition_pipeline.add_component(\"embedder\", CohereMultiTextEmbedder(model=\"embed-multilingual-v3.0\"))\nquery_decomposition_pipeline.add_component(\"multi_query_retriever\", MultiQueryInMemoryEmbeddingRetriever(InMemoryEmbeddingRetriever(document_store=document_store)))\nquery_decomposition_pipeline.add_component(\"multi_query_prompt\", PromptBuilder(multi_query_template))\nquery_decomposition_pipeline.add_component(\"query_resolver_llm\", OpenAIGenerator(model=\"gpt-4o-mini\", generation_kwargs={\"response_format\": Questions}))\n\nquery_decomposition_pipeline.connect(\"prompt\", \"llm\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"embedder.questions\")\nquery_decomposition_pipeline.connect(\"embedder.embeddings\", \"multi_query_retriever.query_embeddings\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"multi_query_retriever.queries\")\nquery_decomposition_pipeline.connect(\"llm.structured_reply\", \"embedder.questions\")\nquery_decomposition_pipeline.connect(\"multi_query_retriever.question_context_pairs\", \"multi_query_prompt.question_context_pairs\")\nquery_decomposition_pipeline.connect(\"multi_query_prompt\", \"query_resolver_llm\")\n\n\nRunning this pipeline with the original question “Who has more siblings, Jamie or Sansa?”, results in the following structured output:\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question},\n                                           \"multi_query_prompt\": {\"question\": question}})\n\nprint(result[\"query_resolver_llm\"][\"structured_reply\"])\n\n\nquestions=[Question(question='How many siblings does Jamie have?', answer='2 (Cersei Lannister, Tyrion Lannister)'),\n           Question(question='How many siblings does Sansa have?', answer='5 (Robb Stark, Arya Stark, Bran Stark, Rickon Stark, Jon Snow)')]\n\n\nReasoning About the Final Answer\n\nThe final step we have to take is to reason about the ultimate answer to the original question. Again, we create a prompt that will instruct an LLM to do this. Given we have the questions output that contains each sub question and answer, we will make these inputs to this final prompt.\n\nreasoning_template = \"\"\"\nYou are a helpful assistant that can answer complex queries.\nHere is the original question you were asked: {{question}}\n\nYou have split this question up into simpler questions that can be answered in\nisolation.\nHere are the questions and answers that you've generated\n{% for pair in question_answer_pair %}\n  {{pair}}\n{% endfor %}\n\nReason about the final answer to the original query based on these questions and\naswers\nFinal Answer:\n\"\"\"\n\nresoning_prompt = PromptBuilder(reasoning_template)\n\n\nTo be able to augment this prompt with the question answer pairs, we will have to extend our previous pipeline and connect the structured_reply from the previous LLM, to the question_answer_pair input of this prompt.\n\nquery_decomposition_pipeline.add_component(\"reasoning_prompt\", PromptBuilder(reasoning_template))\nquery_decomposition_pipeline.add_component(\"reasoning_llm\", OpenAIGenerator(model=\"gpt-4o-mini\"))\n\nquery_decomposition_pipeline.connect(\"query_resolver_llm.structured_reply\", \"reasoning_prompt.question_answer_pair\")\nquery_decomposition_pipeline.connect(\"reasoning_prompt\", \"reasoning_llm\")\n\n\nNow, let’s run this final pipeline and see what results we get:\n\nquestion = \"Who has more siblings, Jamie or Sansa?\"\nresult = query_decomposition_pipeline.run({\"prompt\":{\"question\": question},\n                                           \"multi_query_prompt\": {\"question\": question},\n                                           \"reasoning_prompt\": {\"question\": question}},\n                                           include_outputs_from=[\"query_resolver_llm\"])\n\nprint(\"The original query was split and resolved:\\n\")\n\nfor pair in result[\"query_resolver_llm\"][\"structured_reply\"].questions:\n  print(pair)\nprint(\"\\nSo the original query is answered as follows:\\n\")\nprint(result[\"reasoning_llm\"][\"replies\"][0])\n\n\n🥁 Drum roll please:\n\nThe original query was split and resolved:\n\nquestion='How many siblings does Jaime have?' answer='Jaime has one sister (Cersei) and one younger brother (Tyrion), making a total of 2 siblings.'\nquestion='How many siblings does Sansa have?' answer='Sansa has five siblings: one older brother (Robb), one younger sister (Arya), and two younger brothers (Bran and Rickon), as well as one older illegitimate half-brother (Jon Snow).'\n\nSo the original query is answered as follows:\n\nTo determine who has more siblings between Jaime and Sansa, we need to compare the number of siblings each has based on the provided answers.\n\nFrom the answers:\n- Jaime has 2 siblings (Cersei and Tyrion).\n- Sansa has 5 siblings (Robb, Arya, Bran, Rickon, and Jon Snow).\n\nSince Sansa has 5 siblings and Jaime has 2 siblings, we can conclude that Sansa has more siblings than Jaime.\n\nFinal Answer: Sansa has more siblings than Jaime.\n\n\nWrapping up\n\nGiven the right instructions, LLMs are good at breaking down tasks. Query decomposition is a great way we can make sure we do that for questions that are multiple questions in disguise.\n\nIn this article, you learned how to implement this technique with a twist 🙂 Let us know what you think about using structured outputs for these sorts of use cases. And check out the Haystack experimental repo to see what new features we’re working on.",
    "link": "https://haystack.deepset.ai/blog/query-decomposition",
    "snippet": "This is a technique for retrieval augmented generation (RAG) based AI applications that follows a simple approach.",
    "title": "Advanced RAG: Query Decomposition \u0026 Reasoning - Haystack"
  },
  {
    "content_readable": "This page provides a list of attributes and keywords that are used in YouTrack query language. You'll also find a complete list of operators, symbols, and relative date parameters that are recognized in search queries.\n\nSeveral references on this page are not available in Simple Search. Switch to Advanced Search to access them.\n\nIssue Attributes\n\nEvery issue has base attributes that are set automatically by YouTrack. These include the issue ID, the user who created or applied the last update to the issue, and so on.\n\nThese search attributes represent an \u003cAttribute\u003e in the Search Query Grammar. Their values correspond to the \u003cValue\u003e or \u003cValueRange\u003e parameter.\n\nAttribute-based search uses the syntax attribute: value.\n\nYou can specify multiple values for the target attribute, separated by commas.\n\nExclude specific values from the search results with the syntax attribute: -value.\n\nIn many cases, you can omit the attribute and reference values directly with the # or - symbols. For additional guidelines, see Advanced Search.\n\nattachment text\n\nattachment text: \u003ctext\u003e\n\nReturns issues that include image attachments with the specified text.\n\nattachments\n\nattachments: \u003ctext\u003e\n\nReturns issues that include attachments with the specified filename.\n\nBoard\n\nBoard \u003cboard name\u003e: \u003csprint name\u003e\n\nReturns issues that are assigned to the specified sprint on the specified agile board. To find issues that are assigned to agile boards with sprints disabled, use has: \u003cboard name\u003e.\n\ncc recipients\n\ncc recipients: \u003cuser\u003e\n\nReturns tickets where the specified users are added as CCs.\n\ncode\n\ncode: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words that are formatted as code in the issue description or comments. This includes matches that are formatted as inline code spans, indented and fenced code blocks, and stack traces.\n\ncommented: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues to which comments were added on the specified date or within the specified period.\n\ncommenter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were commented by the specified user or by a member of the specified group.\n\ncomments: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in a comment.\n\ncreated\n\ncreated: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that were created on a specific date or within a specified time frame.\n\ndescription\n\ndescription: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in the issue description.\n\ndocument type\n\ndocument type: Issue | Ticket\n\nReturns either issue or ticket type documents.\n\nGantt\n\nGantt: \u003cchart name\u003e\n\nReturns issues that are assigned to the specified Gantt chart.\n\nhas\n\nhas: \u003cattribute\u003e\n\nThe has keyword functions as a Boolean search term. When used in a search query, it returns all issues that contain a value for the specified attribute. Use the minus operator (-) before the specified attribute to find issues that have empty values.\n\nFor example, to find all issues in the TST project that are assigned to the current user, have a duplicates link, have attachments, but do not have any comments, enter in: TST for: me has: duplicates , attachments , -comments.\n\nYou can use the has keyword in combination with the following attributes:\n\nAttribute\n\nDescription\n\nattachments\n\nReturns issues that have attachments.\n\nboards\n\nReturns issues that are assigned to at least one agile board. When used with an exclusion operator (-), returns issues that aren't assigned to any boards.\n\nBoard \u003cboard name\u003e\n\nReturns issues that are assigned to the specified agile board.\n\ncomments\n\nReturns issues that have one or more comments.\n\ndescription\n\nReturns issues that do not have an empty description.\n\n\u003cfield name\u003e\n\nReturns issues that contain any value in the specified custom field. Enclose field names that contain spaces in braces.\n\nGantt\n\nReturns issues that are assigned to any Gantt chart.\n\n\u003clink type name\u003e\n\nReturns issues that have links that match the specified outward name or inward name. Enclose link names that contain spaces in braces.\n\nFor example, to find issues that are linked as subtasks to parent issues, use:\n\nhas: {Subtask of}\n\nTo find issues that aren't linked to a parent issue, use:\n\nhas: -{Subtask of}\n\nlinks\n\nReturns issues that have any issue link type.\n\nstar\n\nReturns issues that have the star tag for the current user.\n\nunderestimation\n\nReturns issues where the total spent time is greater than the original estimation value.\n\nvcs changes\n\nReturns issues that contain vcs changes.\n\nvotes\n\nReturns issues that have one or more votes.\n\nwork\n\nReturns issues that have one or more work items.\n\nissue ID\n\nissue ID: \u003cissue ID\u003e, #\u003cissue ID\u003e\n\nReturns an issue that matches the specified issue ID. This attribute can also be referenced as a single value with the syntax #\u003cissue ID\u003e or -\u003cissue ID\u003e. When the search returns a single issue, the result is displayed in single issue view.\n\nIf you don't use the syntax for an attribute-based search (issue ID: \u003cvalue\u003e or #\u003cvalue\u003e), the input is also parsed as a text search. In addition to any issue that matches the specified issue ID, the search results include any issue that contains the specified ID in any text attribute.\n\nIf you set the issue ID in quotes, the input is only parsed as a text search. The search results only include issues that contain the specified ID in a text attribute.\n\nNote that even when an issue ID is parsed as a text search, the results do not include issue links. To find issues based on issue links, use the links attribute or reference a specific link type.\n\nlinks\n\nlinks: \u003cissue ID\u003e\n\nReturns all issues that contain links to the specified issue.\n\nlooks like\n\nlooks like: \u003cissue ID\u003e\n\nReturns issues in which the issue summary or description contains words that are found in the issue summary or description in the specified issue. Issues that contain matching words in the issue summary are given higher weight when the search results are sorted by relevance.\n\nmentioned in\n\nmentioned in: \u003cissue id\u003e\n\nReturns issues with issue IDs referenced in the description or a comment of the target issue. Issue IDs in supplemental text fields aren't included in the search results.\n\nmentions\n\nmentions: \u003cissue id\u003e, \u003cuser\u003e\n\nReturns issues that contain either @mention for the specified user or issue IDs referenced in the description or a comment. User mentions and issue IDs in supplemental text fields aren't included in the search results.\n\norganization\n\norganization: \u003corganization name\u003e\n\nReturns issues that belong to the specified organization. This attribute can also be referenced as a single value.\n\nproject\n\nproject: \u003cproject name\u003e | \u003cproject ID\u003e\n\nReturns issues that belong to the specified project. This attribute can also be referenced as a single value.\n\nreporter\n\nreporter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues and tickets that were created by the specified user or a member of the specified group, including tickets created on behalf of the specified user. Use me to return issues that were created by the current user.\n\nresolved date\n\nresolved date: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that were resolved on a specific date or within a specified time frame.\n\nsaved search\n\nsaved search: \u003csaved search name\u003e\n\nReturns issues that match the search criteria of a saved search. This attribute can also be referenced as a single value with the syntax #\u003csaved search name\u003e or -\u003csaved search name\u003e.\n\nsubmitter\n\nsubmitter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were submitted by the specified user or a member of the specified group on behalf of another user. Use me to return issues that were submitted by the current user.\n\nsummary\n\nsummary: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or words in the issue summary.\n\ntag\n\ntag: \u003ctag name\u003e\n\nReturns issues that match a specified tag. This attribute can also be referenced as a single value with the syntax #\u003ctag name\u003e or -\u003ctag name\u003e\n\nupdated\n\nupdated: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues where the most recent change occurred on a specific date or within a specified time frame.\n\nupdater\n\nupdater: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that were last updated by the specified user or a member of the specified group. Use me to return issues to which you applied the last update.\n\nvcs changes\n\nvcs changes: \u003ccommit hash\u003e\n\nReturns issues that contain vcs changes that were applied in the commit object that is identified by the specified SHA-1 commit hash.\n\nvisible to\n\nvisible to: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that are visible to the specified user or a member of the specified group.\n\nvoter\n\nvoter: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns issues that have votes from the specified user or a member of the specified group.\n\nCustom Fields\n\nYou can find issues that are assigned specific values in a custom field. As with other issue attributes, you use the syntax attribute: value or attribute: -value. In this case, the attribute is the name of the custom field. In most cases, you can reference values directly with the # or - symbols.\n\nFor custom fields that are assigned an empty value, you can reference this property as a value. For example, to search for issues that are not assigned to a specific user, enter Assignee: Unassigned or #Unassigned. If the field is not assigned an empty value, find issues that do not store a value in the field with the syntax \u003cfield name\u003e: {No \u003cfield name\u003e} or has: -\u003cfield name\u003e.\n\nThis section lists the search attributes for default custom fields. Note that default fields and their values can be customized. The actual field names, values, and aliases may vary.\n\nAffected versions\n\nAffected versions: \u003cvalue\u003e\n\nReturns issues that were detected in a specific version of the product.\n\nAssignee\n\nAssignee: \u003cuser\u003e | \u003cgroup\u003e\n\nReturns all issues that are assigned to the specified user or a member of the specified group.\n\nFix versions\n\nFix versions: \u003cvalue\u003e\n\nReturns issues that were fixed in a specific version of the product.\n\nFixed in build\n\nFixed in build: \u003cvalue\u003e\n\nReturns issues that were fixed in the specified build.\n\nPriority\n\nPriority: \u003cvalue\u003e\n\nReturns issues that match the specified priority level.\n\nState\n\nState: \u003cvalue\u003e | Resolved | Unresolved\n\nReturns issues that match the specified state.\n\nThe Resolved and Unresolved states cannot be assigned to an issue directly, as they are properties of specific values that are stored in the State field.\n\nBy default, Fixed, Won't fix, Duplicate, Incomplete, Obsolete, and Can't reproduce states are set as Resolved.\n\nThe Submitted, Open, In Progress, Reopened, and To be discussed states are set as Unresolved.\n\nSubsystem\n\nSubsystem: \u003cvalue\u003e\n\nReturns issues that are assigned to a specific subsystem within a project.\n\nType\n\nType: \u003cvalue\u003e\n\nReturns issues that match the specified issue type.\n\nIssue Links\n\nYou can search for issues based on the links that connect them to other issues. Search queries that reference a specific issue link type can be interpreted in two different ways:\n\nWhen specified as \u003clink type\u003e: \u003cissue ID\u003e, the query returns issues linked to the specified issue using this link type.\n\nUsing \u003clink type\u003e: (\u003csub-query\u003e), the query returns issues linked to any issue that matches the specified sub-query using this link type.\n\nWhen searching for linked issues, you can enter the outward name or inward name of any issue link type, then specify your search criteria.\n\nThis list contains search parameters for issue link types that are provided by default in YouTrack. The default issue link types can be customized, which means that the actual names may vary. You can also use this syntax to build search queries that refer to custom link types.\n\nlinks\n\nlinks: \u003cissue ID\u003e\n\nReturns issues that are linked to a target issue.\n\naggregate\n\naggregate \u003caggregation link type\u003e: \u003cissue ID\u003e\n\nReturns issues that are indirectly linked to a target issue. Use this search term to find, for example, issues that are parent issues for a parent issue or subtasks of issues that are also subtasks of a target issue. The results include any issue that is linked to the target issue using the specified link type, whether directly or indirectly.\n\nThis search argument is only compatible with aggregation link types.\n\nDepends on\n\nDepends on: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have depends on links to a target issue or any issue that matches the specified sub-query.\n\nDuplicates\n\nDuplicates: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have duplicates links to a target issue or any issue that matches the specified sub-query.\n\nIs duplicated by\n\nIs duplicated by: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have is duplicated by links to a target issue or any issue that matches the specified sub-query.\n\nIs required for\n\nIs required for: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have is required for links to a target issue or any issue that matches the specified sub-query.\n\nParent for\n\nParent for: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have parent for links to a target issue or any issue that matches the specified sub-query.\n\nRelates to\n\nRelates to: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have relates to links to a target issue or any issue that matches the specified sub-query.\n\nSubtask of\n\nSubtask of: \u003cissue ID\u003e | (\u003csub-query\u003e)\n\nReturns issues that have subtask of links to a target issue or any issue that matches the specified sub-query.\n\nTime Tracking\n\nThere is a dedicated set of search attributes that you can use to find issues that contain time tracking data. These attributes look for specific values that have been added as work items to an issue.\n\nwork\n\nwork: \u003ctext\u003e\n\nReturns issues that contain word forms that match the specified word or phrase in a work item.\n\nwork author: \u003cuser\u003e\n\nReturns issues that have work items that were added by the specified user.\n\nwork type\n\nwork type: \u003cvalue\u003e\n\nReturns issues that have work items that are assigned the specified work type. The query work type: {No type} returns issues that have work items that are not assigned a work item type.\n\nwork date\n\nwork date: \u003cdate\u003e | \u003cperiod\u003e\n\nReturns issues that have work items that are recorded for the specified date or within the specified time frame.\n\ncustom work item attributes\n\nwork \u003cattribute name\u003e: \u003cattribute value\u003e\n\nReturns issues that have work items that are assigned the specified value for a specific work item attribute.\n\nSort Attributes\n\nYou can specify the sort order for the list of issues that are returned by the search query.\n\nYou can sort issues by any of the attributes on the following list. In the Search Query Grammar, these attributes represent the \u003cSortAttribute\u003e value.\n\nsort by\n\nsort by: \u003cvalue\u003e \u003csort order\u003e\n\nSorts issues that are returned by the query in the specified order.\n\nWhen you perform a text search, the results can be sorted by relevance. You cannot specify relevance as a sort attribute. For more information, see Sorting by Relevance.\n\nKeywords\n\nThere are a number of values that can be substituted with a keyword. When you use a keyword in a search query, you do not specify an attribute. A keyword is preceded by the number sign (#) or the minus operator. In the YouTrack Search Query Grammar, these keywords correspond to a \u003cSingleValue\u003e.\n\nme\n\nReferences the current user. This keyword can be used as a value for any attribute that accepts a user.\n\nWhen used as a single value (#me) the search returns issues that are assigned to, reported by, or commented by the current user.\n\nFor example, to find unresolved issues that are assigned to, reported by, or contain comments from the current user, enter #me -Resolved.\n\nThe results also include issues that contain references to the current user in any custom field that stores values as users. For example, you have a custom field Reviewed by that stores a user type. The search query #me -Resolved also includes issues that reference the current user in this custom field.\n\nmy\n\nAn alias for me.\n\nResolved\n\nThis keyword references the Resolved issue property. This property is set based on the current value or combination of values for any custom field that stores a state type. In the default State field, the Resolved property is enabled for the values Fixed, Won't fix, Duplicate, Incomplete, Obsolete, and Can't reproduce.\n\nFor projects that use multiple state-type fields, the Resolved property is only true when all the state-type fields are assigned values that are considered to be resolved.\n\nFor example, to find all resolved issues that were updated today, enter #Resolved updated: Today.\n\nUnresolved\n\nThis keyword references the Unresolved issue property. This property is set based on the current value or combination of values for any custom field that stores a state type. In the default State field, the Resolved property is disabled for the values Submitted, Open, In Progress, Reopened, and To be discussed.\n\nFor projects that use multiple state-type fields, the Unresolved property is true when any state-type field is assigned a value that is not considered to be resolved.\n\nFor example, to find all unresolved issues that are assigned to the user john.doe in the Test project, enter #Unresolved project: Test for: john.doe.\n\nReleased\n\nThis keyword references the Released property for values in a field that stores a version type. It can only be used together with the attribute name or alias for a version field. This means that it cannot be referenced as a single value.\n\nWith fields that store multiple values, the search query returns issues for which at least one of the versions that are stored in the field is marked as released.\n\nFor example, to find all issues in the Test project that are fixed in a version that has not yet been released, enter in: Test fixed in: -Released.\n\nArchived\n\nThis keyword references the Archived property for values in a field that stores a version type. It can only be used together with the attribute name or alias for a version field. This means that it cannot be referenced as a single value.\n\nWith fields that store multiple values, the search query only returns issues for which all the versions that are stored in the field are marked as archived.\n\nFor example, to find all issues in the Test project that are fixed in a version that has been archived, enter in: Test fixed in: Archived.\n\nOperators\n\nThe search query grammar applies default semantics to search queries that do not contain explicit logical operators.\n\nSearches that specify values for multiple attributes are treated as conjunctive. This means that the values are handled as if joined by an AND operator. For example, State: {In Progress} Priority: Critical returns issues that are assigned the specified state and priority.\n\nThis extends to queries that look for the presence or absence of a value for a specific attribute (has) in combination with a reverence to a specific value for the same attribute. The presence or absence of a value and the value itself are considered as separate attributes in the issue. For example, has: assignee Assignee: me only returns issues where the assignee is set and that assignee is you.\n\nFor text search, searches that include multiple words are treated as conjunctive. This means that the words are handled as if joined by an AND operator. For example, State: Open context usage returns issues that contain matching forms for both context and usage.\n\nSearches that include multiple values for a single attribute are treated as disjunctive. This means that the values are handled as if joined by an OR operator. For example, State: {In Progress}, {To be discussed} returns issues that are assigned either one or the other of these two states.\n\nYou can override the default semantics by applying explicit operators to the query.\n\nand\n\nThe AND operator combines matches for multiple search attributes to narrow down the search results. When you join search arguments with the AND operator, the resulting issues must contain matches for all the specified attributes. Use this operator for issue fields that store enum[*] types and tags.\n\nSearch arguments that are joined with an AND operator are always processed as a group and have a higher priority than other arguments that are joined with an OR operator in the query.\n\nHere are a few examples of search queries that contain AND operators:\n\nTo find issues in the Ktor project that are tagged as both Next build and to be tested, enter:\n\nin: Ktor and tag: {Next build} and tag: {to be tested}\n\nThe AND operator between the two tags ensures that the results only contain issues that have both tags.\n\nTo find all issues that are set as Critical priority in the Ktor project or are set as Major priority and are assigned to you in the Kotlin project, enter:\n\nin: Ktor #Critical or in: Kotlin #Major and for: me\n\nIf you were to remove the operators in this query, the references to the project and priority are parsed as disjunctive (OR) statements. The reference to the assignee (me) is then joined with a conjunctive (AND) statement. Instead of getting critical issues in the Ktor project plus a list of major-priority issues that you are assigned in Kotlin, you would only issues that are assigned to you that are either major or critical in either Ktor or Kotlin.\n\nor\n\nThe OR operator combines matches for multiple search attribute to broaden the search results.\n\nThis is very useful when searching for a term which has a synonym that might be used in an issue instead. For example, a search for lesson OR tutorial returns issues that contain matching forms for either \"lesson\" or \"tutorial\". If you remove the OR operator from the query, the search is performed conjunctively, which means the result would only include issues that contain matching forms for both words.\n\nHere's another example of a search query that contains an OR operator:\n\nTo find all issues in the Ktor project that are assigned to you or are tagged as to be tested in any project, enter:\n\nin: Ktor for: me or tag: {to be tested}\n\nParentheses\n\nUsing parentheses ( and ) combines various search arguments to change the order in which the attributes and operators are processed. The part of a search query inside the parentheses has priority and is always processed as a single unit.\n\nThe most common use of parentheses is to enclose two search arguments that are separated by an OR operator and further restrict the search results by joining additional search arguments with AND operators.\n\nAny time you use parentheses in a search query, you need to provide all the operators that join the parenthetical statement to neighboring search arguments. For example, the search query in: Kotlin #Critical (in: Ktor and for:me) cannot be processed. It must be written as in: Kotlin #Critical or (in: Ktor and for:me) instead.\n\nHere's an example of a search query that uses parentheses:\n\nTo find all issues that are assigned to you and are either assigned Critical priority in the Kotlin project or are assigned Major priority in the Ktor project, enter:\n\n(in: Kotlin #Critical or in: Ktor #Major) and for: me\n\nSymbols\n\nThe following symbols can be used to extend or refine a search query.\n\nSymbol\n\nDescription\n\nExamples\n\n-\n\nExcludes a subset from a set of search query results. When you use this symbol with a single value, do not use the number sign.\n\nTo find all unresolved issues except for issues with minor priority and sort the list of results by priority in ascending order, enter #unresolved -minor sort by: priority asc.\n\n#\n\nIndicates that the input represents a single value.\n\nTo find all unresolved issues in the MRK project that were reported by, assigned to, or commented by the current user, enter #my #unresolved in: MRK.\n\n,\n\nSeparates a list of values for a single attribute. Can be used in combination with a range.\n\nTo find all issues assigned to, reported or commented by the current user, which were created today or yesterday, enter #my created: Today, Yesterday.\n\n..\n\nDefines a range of values. Insert this symbol between the values that define the upper and lower ranges. The search results include the upper and lower bounds.\n\nTo find all issues fixed in version 1.2.1 and in all versions from 1.3 to 1.5, enter fixed in: 1.2.1, 1.3 .. 1.5.\n\nTo find all issues created between March 10 and March 13, 2018, enter created: 2018-03-10 .. 2018-03-13.\n\n*\n\nWildcard character. Its behavior is context-dependent.\n\nWhen used with the .. symbol, substitutes a value that determines the upper or lower bound in a range search. The search results are inclusive of the specified bound.\n\nWhen used in an attribute-based search, matches zero or more characters at the end of an attribute value. For more information, see Wildcards in Attribute-based Search.\n\nWhen used in text search, matches zero or more characters in a string. For more information, see Wildcards in Text Search.\n\nTo find all issues created on or before March 10, 2018, enter created: * .. 2018-03-10\n\nTo find issues that have tags that start with refactoring, enter tag: refactoring*.\n\nTo find unresolved issues that contain image attachments in PNG format, enter #Unresolved attachments: *.png.\n\n?\n\nMatches any single character in a string. You can only use this wildcard to search in attributes that store text. For more information, see Wildcards in Text Search.\n\nTo find issues that contain the words \"prioritize\" or \"prioritise\" in the issue description, enter description: prioriti?e\n\n{ }\n\nEncloses attribute values that contain spaces.\n\nTo find all issues with the Fixed state that have the tag to be tested, enter #Fixed tag: {to be tested}.\n\nDate and Period Values\n\nSeveral search attributes reference values that are stored as a date. You can search for dates as single values or use a range of values to define a period.\n\nSpecify dates in the format: YYYY-MM-DD or YYYY-MM or MM-DD. You also can specify a time in 24h format: HH:MM:SS or HH:MM. To specify both date and time, use the format: YYYY-MM-DD}}T{{HH:MM:SS. For example, the search query created: 2010-01-01T12:00 .. 2010-01-01T15:00 returns all issues that were created on 1 January 2010 between 12:00 and 15:00.\n\nPredefined Relative Date Parameters\n\nYou can also use pre-defined relative parameters to search for date values. The values for these parameters are calculated relative to the current date according to the time zone of the current user. The actual value for each parameter is shown in the query assist panel.\n\nThe following relative date parameters are supported:\n\nParameter\n\nDescription\n\nNow\n\nThe current instant.\n\nToday\n\nThe current calendar day.\n\nTomorrow\n\nThe next calendar day.\n\nYesterday\n\nThe previous calendar day.\n\nSunday\n\nThe calendar Sunday for the current week.\n\nMonday\n\nThe calendar Monday for the current week.\n\nTuesday\n\nThe calendar Tuesday for the current week.\n\nWednesday\n\nThe calendar Wednesday for the current week.\n\nThursday\n\nThe calendar Thursday for the current week.\n\nFriday\n\nThe calendar Friday for the current week.\n\nSaturday\n\nThe calendar Saturday for the current week.\n\n{Last working day}\n\nThe most recent working day as defined by the Workdays that are configured in the settings on the Time Tracking page in YouTrack.\n\n{This week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the current week.\n\n{Last week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the previous week.\n\n{Next week}\n\nThe period from 00:00 Monday to 23:59 Sunday for the next week.\n\n{Two weeks ago}\n\nThe period from 00:00 Monday to 23:59 Sunday for the calendar week two weeks prior to the current date.\n\n{Three weeks ago}\n\nThe period from 00:00 Monday to 23:59 Sunday for the calendar week three weeks prior to the current date.\n\n{This month}\n\nThe period from the first day to the last day of the current calendar month.\n\n{Last month}\n\nThe period from the first day to the last day of the previous calendar month.\n\n{Next month}\n\nThe period from the first day to the last day of the next calendar month.\n\nOlder\n\nThe period from 1 January 1970 to the last day of the month two months prior to the current date.\n\nCustom Date Parameters\n\nIf the predefined date parameters don't help you find issues that matter most to you, define your own date range in your search query. Here are a few examples of the queries you can write with custom date parameters:\n\nFind issues that have new comments added in the last seven days:\n\ncommented: {minus 7d} .. Today\n\nFind issues that were updated in the last two hours:\n\nupdated: {minus 2h} .. *\n\nFind unresolved issues that are at least one and a half years old:\n\ncreated: * .. {minus 1y 6M} #Unresolved\n\nFind issues that are due in five days:\n\nDue Date: {plus 5d}\n\nTo define a custom time frame in your search queries, use the following syntax:\n\nTo specify dates or times in the past, use minus.\n\nTo specify dates or times in the future, use plus.\n\nSpecify the time frame as a series of whole numbers followed by a letter that represents the unit of time. Separate each unit of time with a space character. For example:\n\n2y 3M 1w 2d 12h\n\nQueries that specify hours will filter for events that took place during the specified hour. For example, if it is currently 15:35, a query that is written as created: {minus 48h} returns issues that were created two days ago, at any time between 3 and 4 PM. Meanwhile, a query that is written as created: {minus 2d} returns all issues that were created two days ago at any time between midnight and 23:59.\n\nThis level of precision only applies to hours. A query that references the unit of time as 14d returns exactly the same results as 2w.\n\nSearch queries that specify units of time shorter than one hour (minutes, seconds) are not supported.\n\nSearch Query Grammar\n\nThis page provides a BNF description of the YouTrack search query grammar.\n\n\u003cSearchRequest\u003e ::= \u003cOrExpression\u003e \u003cOrExpession\u003e ::= \u003cAndExpression\u003e ('or' \u003cAndExpression\u003e)* \u003cAndExpression\u003e ::= \u003cAndOperand\u003e ('and' \u003cAndOperand\u003e)* \u003cAndOperand\u003e ::= '('\u003cOrExpression\u003e? ')' | Term \u003cTerm\u003e ::= \u003cTermItem\u003e* \u003cTermItem\u003e ::= \u003cQuotedText\u003e | \u003cNegativeText\u003e | \u003cPositiveSingleValue\u003e | \u003cNegativeSingleValue\u003e | \u003cSort\u003e | \u003cHas\u003e | \u003cCategorizedFilter\u003e | \u003cText\u003e \u003cCategorizedFilter\u003e ::= \u003cAttribute\u003e ':' \u003cAttributeFilter\u003e (',' \u003cAttributeFilter\u003e)* \u003cAttribute\u003e ::= \u003cname of issue field\u003e \u003cAttributeFilter\u003e ::= ('-'? \u003cValue\u003e ) | ('-'? \u003cValueRange\u003e) | \u003cLinkedIssuesQuery\u003e \u003cLinkedIssuesQuery\u003e ::= ( \u003cOrExpression\u003e ) \u003cValueRange\u003e ::= \u003cValue\u003e '..' \u003cValue\u003e \u003cPositiveSingleValue\u003e ::= '#'\u003cSingleValue\u003e \u003cNegativeSingleValue\u003e ::= '-'\u003cSingleValue\u003e \u003cSingleValue\u003e ::= \u003cValue\u003e \u003cSort\u003e ::= 'sort by:' \u003cSortField\u003e (',' \u003cSortField\u003e)* \u003cSortField\u003e ::= \u003cSortAttribute\u003e ('asc' | 'desc')? \u003cHas\u003e ::= 'has:' \u003cAttribute\u003e (',' \u003cAttribute\u003e)* \u003cQuotedText\u003e ::= '\"' \u003ctext without quotes\u003e '\"' \u003cNegativeText\u003e ::= '-' \u003cQuotedText\u003e \u003cText\u003e ::= \u003ctext without parentheses\u003e \u003cValue\u003e ::= \u003cComplexValue\u003e | \u003cSimpleValue\u003e \u003cSimpleValue\u003e ::= \u003cvalue without spaces\u003e \u003cComplexValue\u003e ::= '{' \u003cvalue (can have spaces)\u003e '}'\n\nGrammar is case-insensitive.\n\nFor a complete list of search attributes, see Issue Attributes.\n\nTo see sample queries for common use cases, see Sample Search Queries.\n\n11 November 2025",
    "link": "https://www.jetbrains.com/help/youtrack/cloud/search-and-command-attributes.html",
    "snippet": "This page provides a list of attributes and keywords that are used in YouTrack query language. You'll also find a complete list of operators, symbols, and ...",
    "title": "Search Query Reference | YouTrack Cloud Documentation - JetBrains"
  },
  {
    "content_readable": "Introduced in 2020, the GitHub user profile README allow individuals to give a long-form introduction. This multi-part tutorial explains how I setup my own profile to create dynamic content to aid discovery of my projects:\n\nwith the Liquid template engine and Shields (Part 1 of 4)\nusing GitHub's GraphQL API to query dynamic data about all my repos (keep reading below)\nfetching RSS and Social cards from third-party sites (Part 3 of 4)\nautomating updates with GitHub Actions (Part 4 of 4)\n\nYou can visit github.com/j12y to see the final result of what I came up with for my own profile page.\n\nThe GitHub Repo Gallery\n\nThe intended behavior for my repo gallery is to create something similar to pinned repositories but with a bit more visual pizzazz to identify what the projects are about.\n\nIn addition to source code, the repo can have metadata associated with it:\n\n✔️ Name of the repository\n✔️ Short description of the project\n✔️ Programming language used for the project\n✔️ List of tags / topics\n✔️ Image that can be used for social cards\n\nAbout\n\nThe About has editable fields to set the description and topics.\n\nSettings\n\nThe Settings includes a place to upload an image for social media preview cards.\n\nIf you don't set a preview card image, GitHub will generate one automatically that includes some basic profile statistics and your user profile image.\n\nGetting Started with the GitHub REST API\n\nThe way I structured this project is to build a library of any functions related to querying GitHub in src/gh.ts. I used a .env file to store my personal access (classic) token for authentication during local development.\n\n├── package.json\n├── .env\n├── src\n│   ├── app.ts\n│   ├── gh.ts\n│   └── template\n│       ├── README.liquid\n│       ├── contact.liquid\n│       └── gallery.liquid\n└── tsconfig.json\n\n\nI started by using REST endpoints with the Octokit library and TypeScript bindings.\n\n// src/gh.ts\nimport { Octokit } from 'octokit';\nimport { RestEndpointMethodTypes } from '@octokit/plugin-rest-endpoint-methods'\nconst octokit = new Octokit({ auth: process.env.TOKEN});\n\nexport class GitHub {\n    // GET /users/{user}\n    // https://docs.github.com/en/rest/users/users#get-a-user\n    async getUserDetails(user: string): Promise\u003cRestEndpointMethodTypes['users']['getByUsername']['response']['data']\u003e {\n        const { data } = await octokit.rest.users.getByUsername({\n            username: user\n        });\n\n        return data;\n    };\n}\n\n\nFrom src/app.ts I initialize the GithHub class, fetch the results, and can inspect the data being returned as a way to get comfortable with the various endpoints.\n\n// src/app.ts\nimport dotenv from 'dotenv';\nimport { GitHub } from \"./gh\";\n\nexport async function main() {\n  dotenv.config();\n  const gh = new GitHub()\n\n  const details = await gh.getUserDetails();\n  console.log(details);\n}\nmain();\n\n\nI typically get started on projects with simple tests like this to make sure all the various pieces to an integration can be configured and work together before getting too far.\n\nUse the GitHub GraphQL Endpoint\n\nTo get the data needed for the gallery layout, it would be necessary to make multiple calls to REST endpoints. In addition there is some data not yet available from the REST endpoint at all.\n\nSwitching to query using the GitHub GraphQL interface becomes helpful. This single endpoint can process a number of queries and give precise control over the data needed.\n\n💡 The GitHub GraphQL Explorer was fundamentally useful for me to get the right queries defined\n\nThis query needs authorization with the personal access token to fetch profile details about followers similar to some of the details returned from the REST endpoints.\n\n// src/gh.ts\n\nconst { graphql } = require(\"@octokit/graphql\")\n\nexport class GitHub \n    // https://docs.github.com/en/graphql\n    graphqlWithAuth = graphql.defaults({\n        headers: {\n            authorization: `token ${process.env.TOKEN}`\n        }\n    })\n\n    async getProfileOverview(name: string): Promise\u003cany\u003e {\n        const query = `\n            query getProfileOverview($name: String!) { \n                user(login: $name) { \n                    followers(first: 100) {\n                        totalCount\n                        edges {\n                            node {\n                                login\n                                name\n                                twitterUsername\n                                email\n                            }\n                        }\n                    }\n                }\n            }\n        `;\n        const params = {'name': name};\n\n        return await this.graphqlWithAuth(query, params);\n    }\n}\n\n\nThere are other resources such as Learn GraphQL if you haven't written many queries yet which explains the basics around syntax, schemas, and types.\n\nGetting used to GitHub's GraphQL schema primarily involves walking a series of edges to find linked nodes for objects of interest and their data attributes. In this case, I started by querying a user profile, finding the list of linked followers, and then inspecting their corresponding node's login, name, and email address.\n\n   ┌────────────┐\n   │    user    │\n   └─────┬──────┘\n         │\n         └──followers\n               │\n               ├─── totalCount\n               │\n               └─── edges\n                     │\n                     └── node\n\n\n\nFaceted Search by Topic Frequency\n\nI often want to find repositories by a topic. The user interface makes it easy to filter among many repositories by programming language such as python but unless you know which topics are relevant can become hit or miss. Was it nlp or nltk I used to categorize related repositories. Did I use dolby or dolbyio to identify repos I have for work projects.\n\nA faceted search that narrows down the number of matching repositories can be helpful for finding relevant projects like this. Given topics on GitHub are open-ended and not constrained to fixed values, it can be easy to accidentally categorize repos with variations like lambda and aws-lambda such that searches only identify partial results.\n\nTo address this, a GraphQL query gathering topics by frequency of usage within an organization or individual account can help with identifying the most useful topics.\n\nThe steps for this would be:\n\nQuery repository topics\nProcess results to group topics by frequency\nUse a template to render the gallery\n\n1 - Query Repository Topics\n\nI used the following GraphQL query to fetch my repositories and their corresponding topics.\n\nconst query = `\n    query getReposOverview($name: String!) {\n        user(login: $name) {\n            repositories(first: 100 ownerAffiliations: OWNER) {\n                edges {\n                    node {\n                        name\n                        url\n                        description\n                        openGraphImageUrl\n                        repositoryTopics(first: 100) {\n                            edges {\n                                node {\n                                    topic {\n                                        name\n                                    }\n                                }\n                            }\n                        }\n                        primaryLanguage {\n                            name\n                        }\n                    }\n                }\n            }\n        }\n    }\n`;\n\n\nThis query starts by filtering by user owned repositories (not counting forks) along with the metadata such as the social image.\n\n2 - Process Results and Group Topics by Frequency\n\nIterating over the results of the query the convention used was to look for anything with the topic github-gallery as something to be featured in the gallery. We also get a count of usage for each of the other topics and programming languages.\n\nvar topics: {[id: string]: number } = {};\nvar languages: {[id: string]: number } = {};\nvar gallery: {[id: string]: any } = {};\n\nconst repos = await gh.getReposOverview(user);\nfor (let repo of repos.user.repositories.edges) {\n  // Count occurrences of each topic\n  repo.node.repositoryTopics.edges.forEach((topic: any) =\u003e {\n    if (topic.node.topic.name == 'github-gallery') {\n      gallery[repo.node.name] = repo;\n    } else {\n      topics[topic.node.topic.name] = topic.node.topic.name in topics ? topics[topic.node.topic.name] + 1 : 1;\n    }\n  });\n\n  // Count and include count of language used\n  if (repo.node.primaryLanguage) {\n    languages[repo.node.primaryLanguage.name] = repo.node.primaryLanguage.name in languages ? languages[repo.node.primaryLanguage.name] + 1 : 1;\n  }\n}\n\n\n3 - Use a template to render the gallery\n\nThe topics are ordered by how often they are used. From the previous post on setting up a dynamic profile, I'm passing scope to the liquid engine for any data to be made available in a template.\n\n  // Share topics sorted by frequency of use for filtering repositories\n  // from the organization\n  scope['topics'] = Object.entries(topics).sort(function (first, second) {\n    return second[1] - first[1];\n  });\n  scope['languages'] = Object.entries(languages).sort(function (first, second) {\n    return second[1] - first[1];\n  });\n\n  // Gather topics across repos\n  scope['gallery'] = Object.values(gallery);\n\n\n\nThe repository page on GitHub uses query parameters to sort and filter, so items like topic:nltk can be passed directly in the URL to load a filtered view of repositories. The shields create a nice looking button for navigating to the topic, and use of icons for programming languages helps find relevant code samples.\n\n\u003cp\u003eExplore some of my projects: \u003cbr/\u003e\n{% for language in languages %}\u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=language%3A{{language[0]}}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/badge/{{ language[0] }}-{{ language[1] }}-lightgrey?logo={{ language[0] }}\u0026label={{ language[0] }}\u0026labelColor=000000\" alt=\"{{ language[0] }}\"/\u003e\u003c/a\u003e {% endfor %}\n{% for topic in topics %}\u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=topic%3A{{topic[0]}}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/static/v1?label={{topic[0]}}\u0026message={{ topic[1] }}\u0026labelColor=blue\"/\u003e\u003c/a\u003e {% endfor %}\n\u003c/p\u003e\n\n\nThe presentation includes a 3-column row in a table for displaying the metadata about each featured gallery project. This could display all repositories, but limiting to one or two rows seems sensible for managing screen space.\n\n{% for tile in gallery limit:3 %}\n\u003ctd width=\"25%\" valign=\"top\" style=\"padding-top: 20px; padding-bottom: 20px; padding-left: 30px; padding-right: 30px;\"\u003e\n\u003ca href=\"{{ tile.node.url }}\"\u003e\u003cimg src=\"{{ tile.node.openGraphImageUrl }}\"/\u003e\u003c/a\u003e\n\u003cp\u003e\u003cb\u003e\u003ca href=\"{{ tile.node.url }}\"\u003e{{ tile.node.name }}\u003c/b\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e{{ tile.node.description }}\u003cbr/\u003e\n{% for topic in tile.node.repositoryTopics.edges %} \u003ca href=\"https://github.com/j12y?tab=repositories\u0026q=topic%3A{{topic.node.topic.name }}\u0026type=\u0026language=\u0026sort=\"\u003e\u003cimg src=\"https://img.shields.io/badge/{{ topic.node.topic.name | replace: \"-\", \"--\" }}-blue?style=pill\"/\u003e\u003c/a\u003e {% endfor %}\n\u003c/p\u003e\n\u003c/td\u003e\n{% endfor %}\n\n\nWith all of that put together, we now have a gallery that displays a picture along with the name, description, and tags. The picture can highlight a user interface, architectural diagram, or some other branded visual to help identify the purpose of the project visually.\n\nWe can also use this to maintain our list of topics and make finding relevant topics for an audience easier to discover.\n\nLearn more\n\nI hope this overview helps with getting yourself sorted. The next article will dive into some of the other ways of aggregating content.\n\nFetching RSS and Social Cards for GitHub Profile (Part 3 of 4)\nAutomating GitHub Profile Updates with Actions (Part 4 of 4)\n\nDid this help you get your own profile started? Let me know and follow to get notified about updates.",
    "link": "https://dev.to/j12y/query-github-repo-topics-using-graphql-35ha",
    "snippet": "Creating a customized user profile page for GitHub to showcase work projects and make navigation to relevant topics easier.",
    "title": "Query GitHub Repo Topics Using GraphQL - DEV Community"
  },
  {
    "content_readable": "Updated\n\n4 days ago\n\nWith millions of conversations happening all over the web each day, it can be a long and tedious task trying to get more relevant mentions and tighten the scope of your query, but with the help of Advanced Topic Query, it can be at your fingertips.\n\nIn Social Listening, you have the option to create an advanced query that is not limited to ANY, ALL, or NONE formatting of query building. Advanced query builder can be used to form complex text queries which are not possible with a normal query builder.\n\nWhat is an Advanced Topic Query?\n\nAdvanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more.\n\nBy using advanced query you can pinpoint relevant information which is not possible with basic topic query.\n\nIt gives you the power to find the needle in a haystack.\n\n​\n\nBasic Topic Query v/s Advanced Topic Query\n\nWith more operators to use you can fetch conversations by language, geography, social media channel, volume, author, #listening, @account monitoring, user segment, and much more, it can give you access to more actionable insights.\n\nIn Basic Query, you can only use boolean operators like OR/ NOT/ AND/ along with NEAR. On the other hand, in Advanced Topic Query, it gives you access to use OR with/ inside AND, NOT (nested and within operator use cases), advanced operators, exact match operators etc.\n\nLet's see the use cases where advanced query will help in getting more insightful mentions –\n\nUse case #1: To search \"pepsi\" OR \"drink\" along with \"cups\".\n\nBasic Query\n\nAdvancd Query\n\nUse case #2: To get mentions of \"pepsi\" along with \"coke\" or \"sprite\" but not \"miranda\" with people having \"follower count\" between 100 to 1000 on \"twitter\".\n\nBasic Query\n\nAdvanced Query\n\nNot feasible in the basic Topic query\n\nThis is where we need the advanced Topic query.​\n\nHow to create an advanced Topic query?\n\nClick the New Tab icon. Under Sprinklr Insights, Click Topics within Listening.\n\nOn the Topics window, click Add Topic in the top right corner. Fill in the required fields and click Create.\n\nIn the Setup Query tab of Create New Topic window, select Advanced Query in the query section.\n\n​\n\nType your query in the Advanced Query field with the required operators and syntax.\n\nClick Save.\n\nTip: While using Instagram as a Listening Source, be sure that your query keywords include hashtags.\n\nWhich operators to use for building Topic queries?\n\nOperators for Topic queries\n\nIn creation of advanced queries along with boolean operators OR/ AND/ NOT/ etc, Sprinklr also supports operator types –\n\nSearch Operators\n\nExact Match Operators\n\nOperators for Getting Post Replies/Comments​\n\nSprinklr provides its user edge by giving them power to use Keywords List inside advanced query along with Operators mentioned.\n\nCreate query using Topic query operators\n\nFollowing are some most used operator examples and their results –\n\nOperator\n\nExample\n\nResult\n\nhello\n\nSearch for the term \"hello\"\n\nsocial sprinklr\n\nSearch for the phrases \"social\" and \"sprinklr\"\n\n​\n\nNote: Using this will show preview but topic can not be saved as it will show error, Use \"Social Sprinklr\" or (Social AND/OR/ NOT/ NEAR Sprinklr) to eliminate error.\n\nAND\n\nsocial AND sprinklr\n\nSearch for \"social\" and \"sprinklr\" anywhere within the complete message, irrespective of keywords between them\n\nOR\n\nsocial OR sprinklr\n\nSearch for \"social\" or \"sprinklr\"\n\nNOT\n\n\"social media\" NOT \"facebook\"\n\nSearch for results that contain \"social media\" but not \"facebook\"\n\n~\n\n\"social media\"~10\n\nSearch for \"social\" and \"media\" within 10 words of each other\n\nNEAR\n\nsocial NEAR/10 media\n\nSearch for \"social\" and \"media\" within 10 words of each other\n\nNote: This operator can be used with keyword lists.\n\nONEAR\n\nsocial ONEAR/10 media\n\nSearch for \"social\" and \"media\" within 10 words of each other in an ordered way\n\nNote: This operator searches social ahead of media.\n\ntitle\n\ntitle: (\"social media\")\n\nSearch for social media in the title of the message\n\nNote: It is mostly used for News, blogs, reviews and other sites.\n\nauthor\n\nauthor: \"social_media\"\n\nFetches all the mentions from author name: social_media\n\nSome other operators which are supported by Sprinklr are –\n\nProximity: It is used to define proximity or distance between 2 keywords only, whereas, NEAR can be used to define proximity between two keywords as well as keyword lists.\n\nOnear (Ordered Near): It sets the order in which the keywords will appear. For example, Keyword-List1 ONEAR/10 Keyword-List2 will ensure keywords from Keyword-List1 appear first and then Keyword-List2 keywords will follow within space of maximum 10.\n\nStep by step guide to make advanced Topic query\n\nUse case\n\nTo write query fetching mentions of ZARA –\n\n​\n\n(# listening is used for instagram listening)\n\nGetting mention along with clothing or fashion related terms only –\n\nRemoving profanity from mention (use case specific) –\n\nRemoving profanity from mention (use case specific) –\n\nAs social media has lots of profane words you can also remove it by making a keyword list and negating it from query –\n\nFiltering Mentions in English –\n\n​\n\nApplying source input as Twitter –\n\nGetting mentions of those users which have followers between 100 to 1000 –\n\n​\n\nAdvanced example showcasing use of Topic query operators and keyword list –\n\nBest practices while using Advanced Query\n\nUse of Parentheses\n\n​Parentheses are not necessary to enclose a search query but can be useful while grouping operations together for more complex queries.\n\n​\n\nFor example, if you want to return results that mention Samsung or Apple phones, and also want to query content that mentions phones along with either Apple or Samsung, you could use parentheses around Apple and Samsung to group three keywords together, as shown below –\n\nphone AND (Apple OR Samsung)\n\n​\n\nUse of parentheses within brackets, is further explained below with an example –\n\n[(internet of things ~3) OR iot OR internetofthings) AND (robots OR robot OR #robot)] NOT [things]\n\nTip: You can also use parentheses within brackets to set off additional operations within the Advanced Query field. The end result should look similar to the result summary of a basic query, built using multiple operations within a single section.\n\n\nAs a part of the rest of the query, this will perform the following operations –\n\nSearch for posts that contain the phrase \"internet of things\" or \"#internetofthings\"\n\nFrom within those results, keep any result that also says \"robots\" or \"robot\" or \"#robot\" within three words (a proximity search) of either \"internet of things\" or \"iot\" or \"internetofthings\".\n\nDiscard any results that just have the phrase \"things\" within.\n\nParentheses nested within brackets intend to set off different operations as isolated processes. In the previous example, if you build an Advanced Query that states [(internet of things OR iot OR internet of things) AND (robots OR robot OR #robot)] your query will return results that contain ANY of the first three terms and the second three terms.\n\nHowever, if you build an Advanced Query that states [internet of things OR iot OR internet of things AND robots OR robot OR #robot], your query will return any result that contains the phrase \"internet of things\" or the word \"iot\" or the word \"robot\" or the hashtag #robot or specifically the phrase \"internet of things\" within the same message as the word \"robots\".\n\nNote:\n\nYou cannot use a \"NOT\" statement with an \"OR\" statement.\n\n\nExample:\n( social OR NOT media ) ❌\n( social NOT media ) ✅\n\n(( social OR ( media NOT facebook )) ✅\n\nWhy?\n\nQuery should not contain \"NOT\" terms in \"OR\" with other terms, \"NOT\" clauses should be used in \"AND\" with other terms, using \"NOT\" in \"OR\" will bring too much data.\n\nUse of Quotation marks\n\nQuotation marks can be used for phrases in which you are looking for an exact match of those particular words in a specific order. Using parentheses or quotation marks for single-word queries is not mandatory.\n\nUse straight quotation marks ( \" \" ) for outlining phrases within it. The use of curved quotation marks (“ ”) will not produce your desired results.\n\nParentheses are generally used to group keywords or phrases joined by one or more operators together, but with other keywords involved, parentheses and quotations would act differently. For example –\n\nVersion 1: \"Phil Schiller\" AND \"Apple Marketing\" will return results for content with the exact phrase Phil Schiller (or phil schiller) and the exact phrase Apple Marketing (or apple marketing).\n\nNote: Here exact does not mean case sensitive as in the case of exactMessage Operator.\n\nExample: exactMessage: (\"Phil Schiller\" AND \"Apple Marketing\"), which will fetch results for phrase Phil Schiller (not phil schiller) and the exact phrase Apple Marketing (not apple marketing).\n\n\nVersion 2: \"Phil Schiller\" AND (Apple OR Marketing) will return results for content with the phrase \"Phil Schiller\" (together) and at least one of the words, Apple or Marketing.\n\nHandling for Broad \u0026 Ambiguous Keywords\n\nIt is very important to not use/reduce use of broad keywords in advanced queries. Broader keywords will fetch mentions that are unrelated to topic of interest, and eventually hinder dashboard/insights\n\nFor all keywords used in an advanced topic query, ensure they are directly related to the topic of interest.\n\nIn case keywords are broad but relevant to topic, they should be tied to some relevant keywords related to that topic, by using NEAR Operators\n\nExample: Robot is an important keyword for Robot Company. However just using this keyword will fetch irrelevant keywords as it’s a broad keyword used for other entities as well (Robot Street, etc).\n\nInstead of using just Robot keyword, we should use: Robot NEAR/4 (Technology OR “machine” OR # tech OR IOT OR “Internet of things” ….)\n\nNote how keywords related to Robot are used with NEAR Operator. Related keywords could be related entities, industry keywords, parent company, country keywords, etc.\n\nFrequently asked questions\n\n​\n\nIs it compulsory to put quotation marks around phrases like \"apple music\" or can we use apple music directly?\n\nHow can I eliminate posts with many spam #’s or @’s?\n\nCan exact match or parent operators be used in advanced query?\n\nWhy am I able to see mentions in preview during making of topic but not in dashboard?\n\nDuring listening to @ mentions a lot of spam mentions are also getting tagged along, e.g. like wanting to get mentions of @tom but messages of @tom_fan56 are also coming. How to remove these irrelevant mentions?\n\nIf I write query as “tom” will it also fetch mentions such as tom_jerry / @tom / #tom ?\n\n​",
    "link": "https://www.sprinklr.com/help/articles/faqs-and-advanced-usecases/create-an-advanced-topic-query/646331628ea3c9635cf36711",
    "snippet": "Advanced topic query helps you to fetch relevant conversations by using advanced operators, nested parentheses, operators within operators, and much more. By ...",
    "title": "‎Create an Advanced Topic Query | Sprinklr Help Center"
  },
  {
    "content_readable": "The query language for the Azure Resource Graph supports many operators and functions. Each work and operate based on Kusto Query Language (KQL). To learn about the query language used by Resource Graph, start with the tutorial for KQL.\n\nThis article covers the language components supported by Resource Graph:\n\nUnderstanding the Azure Resource Graph query language\n\nResource Graph tables\nExtended properties\nResource Graph custom language elements\n\nShared query syntax (preview)\nSupported KQL language elements\n\nSupported tabular/top level operators\nQuery scope\nEscape characters\nNext steps\n\nResource Graph tables\n\nResource Graph provides several tables for the data it stores about Azure Resource Manager resource types and their properties. Resource Graph tables can be used with the join operator to get properties from related resource types.\n\nResource Graph tables support the join flavors:\n\ninnerunique\ninner\nleftouter\nfullouter\n\nResource Graph table Can join other tables? Description\nAdvisorResources Yes Includes resources related to Microsoft.Advisor.\nAlertsManagementResources Yes Includes resources related to Microsoft.AlertsManagement.\nAppServiceResources Yes Includes resources related to Microsoft.Web.\nAuthorizationResources Yes Includes resources related to Microsoft.Authorization.\nAWSResources Yes Includes resources related to Microsoft.AwsConnector.\nAzureBusinessContinuityResources Yes Includes resources related to Microsoft.AzureBusinessContinuity.\nChaosResources Yes Includes resources related to Microsoft.Chaos.\nCommunityGalleryResources Yes Includes resources related to Microsoft.Compute.\nComputeResources Yes Includes resources related to Microsoft.Compute Virtual Machine Scale Sets.\nDesktopVirtualizationResources Yes Includes resources related to Microsoft.DesktopVirtualization.\nDnsResources Yes Includes resources related to Microsoft.Network.\nEdgeOrderResources Yes Includes resources related to Microsoft.EdgeOrder.\nElasticsanResources Yes Includes resources related to Microsoft.ElasticSan.\nExtendedLocationResources Yes Includes resources related to Microsoft.ExtendedLocation.\nFeatureResources Yes Includes resources related to Microsoft.Features.\nGuestConfigurationResources Yes Includes resources related to Microsoft.GuestConfiguration.\nHealthResourceChanges Yes Includes resources related to Microsoft.Resources.\nHealthResources Yes Includes resources related to Microsoft.ResourceHealth.\nInsightsResources Yes Includes resources related to Microsoft.Insights.\nIoTSecurityResources Yes Includes resources related to Microsoft.IoTSecurity and Microsoft.IoTFirmwareDefense.\nKubernetesConfigurationResources Yes Includes resources related to Microsoft.KubernetesConfiguration.\nKustoResources Yes Includes resources related to Microsoft.Kusto.\nMaintenanceResources Yes Includes resources related to Microsoft.Maintenance.\nManagedServicesResources Yes Includes resources related to Microsoft.ManagedServices.\nMigrateResources Yes Includes resources related to Microsoft.OffAzure.\nNetworkResources Yes Includes resources related to Microsoft.Network.\nPatchAssessmentResources Yes Includes resources related to Azure Virtual Machines patch assessment Microsoft.Compute and Microsoft.HybridCompute.\nPatchInstallationResources Yes Includes resources related to Azure Virtual Machines patch installation Microsoft.Compute and Microsoft.HybridCompute.\nPolicyResources Yes Includes resources related to Microsoft.PolicyInsights.\nRecoveryServicesResources Yes Includes resources related to Microsoft.DataProtection and Microsoft.RecoveryServices.\nResourceChanges Yes Includes resources related to Microsoft.Resources.\nResourceContainerChanges Yes Includes resources related to Microsoft.Resources.\nResourceContainers Yes Includes management group (Microsoft.Management/managementGroups), subscription (Microsoft.Resources/subscriptions) and resource group (Microsoft.Resources/subscriptions/resourcegroups) resource types and data.\nResources Yes The default table if a table isn't defined in the query. Most Resource Manager resource types and properties are here.\nSecurityResources Yes Includes resources related to Microsoft.Security.\nServiceFabricResources Yes Includes resources related to Microsoft.ServiceFabric.\nServiceHealthResources Yes Includes resources related to Microsoft.ResourceHealth/events.\nSpotResources Yes Includes resources related to Microsoft.Compute.\nSupportResources Yes Includes resources related to Microsoft.Support.\nTagsResources Yes Includes resources related to Microsoft.Resources/tagnamespaces.\n\nFor a list of tables that includes resource types, go to Azure Resource Graph table and resource type reference.\n\nNote\n\nResources is the default table. While querying the Resources table, it isn't required to provide the table name unless join or union are used. But the recommended practice is to always include the initial table in the query.\n\nTo discover which resource types are available in each table, use Resource Graph Explorer in the portal. As an alternative, use a query such as \u003ctableName\u003e | distinct type to get a list of resource types the given Resource Graph table supports that exist in your environment.\n\nThe following query shows a simple join. The query result blends the columns together and any duplicate column names from the joined table, ResourceContainers in this example, are appended with 1. As ResourceContainers table has types for both subscriptions and resource groups, either type might be used to join to the resource from Resources table.\n\nResources\n| join ResourceContainers on subscriptionId\n| limit 1\n\n\nThe following query shows a more complex use of join. First, the query uses project to get the fields from Resources for the Azure Key Vault vaults resource type. The next step uses join to merge the results with ResourceContainers where the type is a subscription on a property that is both in the first table's project and the joined table's project. The field rename avoids join adding it as name1 since the property already is projected from Resources. The query result is a single key vault displaying type, the name, location, and resource group of the key vault, along with the name of the subscription it's in.\n\nResources\n| where type == 'microsoft.keyvault/vaults'\n| project name, type, location, subscriptionId, resourceGroup\n| join (ResourceContainers | where type=='microsoft.resources/subscriptions' | project SubName=name, subscriptionId) on subscriptionId\n| project type, name, location, resourceGroup, SubName\n| limit 1\n\n\nNote\n\nWhen limiting the join results with project, the property used by join to relate the two tables, subscriptionId in the above example, must be included in project.\n\nExtended properties\n\nAs a preview feature, some of the resource types in Resource Graph have more type-related properties available to query beyond the properties provided by Azure Resource Manager. This set of values, known as extended properties, exists on a supported resource type in properties.extended. To show resource types with extended properties, use the following query:\n\nResources\n| where isnotnull(properties.extended)\n| distinct type\n| order by type asc\n\n\nExample: Get count of virtual machines by instanceView.powerState.code:\n\nResources\n| where type == 'microsoft.compute/virtualmachines'\n| summarize count() by tostring(properties.extended.instanceView.powerState.code)\n\n\nResource Graph custom language elements\n\nShared query syntax (preview)\n\nAs a preview feature, a shared query can be accessed directly in a Resource Graph query. This scenario makes it possible to create standard queries as shared queries and reuse them. To call a shared query inside a Resource Graph query, use the {{shared-query-uri}} syntax. The URI of the shared query is the Resource ID of the shared query on the Settings page for that query. In this example, our shared query URI is /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS. This URI points to the subscription, resource group, and full name of the shared query we want to reference in another query. This query is the same as the one created in Tutorial: Create and share a query.\n\nNote\n\nYou can't save a query that references a shared query as a shared query.\n\nExample 1: Use only the shared query:\n\nThe results of this Resource Graph query are the same as the query stored in the shared query.\n\n{{/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS}}\n\n\nExample 2: Include the shared query as part of a larger query:\n\nThis query first uses the shared query, and then uses limit to further restrict the results.\n\n{{/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SharedQueries/providers/Microsoft.ResourceGraph/queries/Count VMs by OS}}\n| where properties_storageProfile_osDisk_osType =~ 'Windows'\n\n\nSupported KQL language elements\n\nResource Graph supports a subset of KQL data types, scalar functions, scalar operators, and aggregation functions. Specific tabular operators are supported by Resource Graph, some of which have different behaviors.\n\nSupported tabular/top level operators\n\nHere's the list of KQL tabular operators supported by Resource Graph with specific samples:\n\nKQL Resource Graph sample query Notes\ncount Count key vaults\ndistinct Show resources that contain storage\nextend Count virtual machines by OS type\njoin Key vault with subscription name Join flavors supported: innerunique, inner, leftouter, and fullouter. Limit of three join or union operations (or a combination of the two) in a single query, counted together, one of which might be a cross-table join. If all cross-table join use is between Resource and ResourceContainers, then three cross-table join are allowed. Custom join strategies, such as broadcast join, aren't allowed. For which tables can use join, go to Resource Graph tables.\nlimit List all public IP addresses Synonym of take. Doesn't work with Skip.\nmvexpand Legacy operator, use mv-expand instead. RowLimit max of 2,000. The default is 128.\nmv-expand List Azure Cosmos DB with specific write locations RowLimit max of 2,000. The default is 128. Limit of 3 mv-expand in a single query.\norder List resources sorted by name Synonym of sort\nparse Get virtual networks and subnets of network interfaces It's optimal to access properties directly if they exist instead of using parse.\nproject List resources sorted by name\nproject-away Remove columns from results\nsort List resources sorted by name Synonym of order\nsummarize Count Azure resources Simplified first page only\ntake List all public IP addresses Synonym of limit. Doesn't work with Skip.\ntop Show first five virtual machines by name and their OS type\nunion Combine results from two queries into a single result Single table allowed: | union [kind= inner|outer] [withsource=ColumnName] Table. Limit of three union legs in a single query. Fuzzy resolution of union leg tables isn't allowed. Might be used within a single table or between the Resources and ResourceContainers tables.\nwhere Show resources that contain storage\n\nThere's a default limit of three join and three mv-expand operators in a single Resource Graph SDK query. You can request an increase in these limits for your tenant through Help + support.\n\nTo support the Open Query portal experience, Azure Resource Graph Explorer has a higher global limit than Resource Graph SDK.\n\nNote\n\nYou can't reference a table as right table multiple times, which exceeds the limit of 1. If you do so, you would receive an error with code DisallowedMaxNumberOfRemoteTables.\n\nQuery scope\n\nThe scope of the subscriptions or management groups from which resources are returned by a query defaults to a list of subscriptions based on the context of the authorized user. If a management group or a subscription list isn't defined, the query scope is all resources, and includes Azure Lighthouse delegated resources.\n\nThe list of subscriptions or management groups to query can be manually defined to change the scope of the results. For example, the REST API managementGroups property takes the management group ID, which is different from the name of the management group. When managementGroups is specified, resources from the first 10,000 subscriptions in or under the specified management group hierarchy are included. managementGroups can't be used at the same time as subscriptions.\n\nExample: Query all resources within the hierarchy of the management group named My Management Group with ID myMG.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01\n\n\nRequest Body\n\n{\n  \"query\": \"Resources | summarize count()\",\n  \"managementGroups\": [\"myMG\"]\n}\n\n\nThe AuthorizationScopeFilter parameter enables you to list Azure Policy assignments and Azure role-based access control (Azure RBAC) role assignments in the AuthorizationResources table that are inherited from upper scopes. The AuthorizationScopeFilter parameter accepts the following values for the PolicyResources and AuthorizationResources tables:\n\nAtScopeAndBelow (default if not specified): Returns assignments for the given scope and all child scopes.\nAtScopeAndAbove: Returns assignments for the given scope and all parent scopes, but not child scopes.\nAtScopeAboveAndBelow: Returns assignments for the given scope, all parent scopes, and all child scopes.\nAtScopeExact: Returns assignments only for the given scope; no parent or child scopes are included.\n\nNote\n\nTo use the AuthorizationScopeFilter parameter, be sure to use the 2021-06-01-preview or later API version in your requests.\n\nExample: Get all policy assignments at the myMG management group and Tenant Root (parent) scopes.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-06-01-preview\n\n\nRequest Body Sample\n\n{\n  \"options\": {\n    \"authorizationScopeFilter\": \"AtScopeAndAbove\"\n  },\n  \"query\": \"PolicyResources | where type =~ 'Microsoft.Authorization/PolicyAssignments'\",\n  \"managementGroups\": [\"myMG\"]\n}\n\n\nExample: Get all policy assignments at the mySubscriptionId subscription, management group, and Tenant Root scopes.\n\nREST API URI\n\nPOST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-06-01-preview\n\n\nRequest Body Sample\n\n{\n  \"options\": {\n    \"authorizationScopeFilter\": \"AtScopeAndAbove\"\n  },\n  \"query\": \"PolicyResources | where type =~ 'Microsoft.Authorization/PolicyAssignments'\",\n  \"subscriptions\": [\"mySubscriptionId\"]\n}\n\n\nEscape characters\n\nSome property names, such as those that include a . or $, must be wrapped or escaped in the query or the property name is interpreted incorrectly and doesn't provide the expected results.\n\nDot (.): Wrap the property name ['propertyname.withaperiod'] using brackets.\n\nExample query that wraps the property odata.type:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.['odata.type']\n\n\nDollar sign ($): Escape the character in the property name. The escape character used depends on the shell that runs Resource Graph.\n\nBash: Use a backslash (\\) as the escape character.\n\nExample query that escapes the property $type in Bash:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.\\$type\n\n\ncmd: Don't escape the dollar sign ($) character.\n\nPowerShell: Use a backtick (`) as the escape character.\n\nExample query that escapes the property $type in PowerShell:\n\nwhere type=~'Microsoft.Insights/alertRules' | project name, properties.condition.`$type\n\n\nNext steps\n\nAzure Resource Graph query language Starter queries and Advanced queries.\nLearn more about how to explore Azure resources.",
    "link": "https://learn.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language",
    "snippet": "The query language for the Azure Resource Graph supports many operators and functions. Each work and operate based on Kusto Query Language (KQL).",
    "title": "Understanding the Azure Resource Graph query language - Microsoft"
  }
]
Output (2728 bytes)
[
  {
    "text": "# Huginn — quick summary\n\nHuginn is an open-source, self-hosted automation server that runs configurable \"agents\" to monitor websites, APIs, social streams and trigger actions (think a hackable IFTTT/Zapier you run yourself). It supports many integrations, can be deployed via Docker/Heroku/manual install, and is extensible with external agent gems.\n\n## Most relevant sources\n\n1. **huginn/huginn — GitHub** — https://github.com/huginn/huginn\n   - The official repository describes Huginn as a system for building agents that read the web, watch for events, and act on your behalf, provides examples and full install/deploy instructions (Docker, local, Heroku, OpenShift), and notes extensibility (agent gems) and MIT licensing.\n\n2. **Huginn - Quickstart | Elest.io** — https://elest.io/open-source/huginn/resources/quickstart\n   - A concise quickstart and UI overview that explains core concepts (Agents, Scenarios, Credentials, Events, Jobs) and shows how to log in, create agents/scenarios, and manage background jobs on a hosted instance.\n\n3. **Huginn: An Open-Source, Self-Hosted IFTTT — dev.to (Heroku)** — https://dev.to/heroku/huginn-an-open-source-self-hosted-ifttt-5hd6\n   - A practical overview showing how Agents generate/consume JSON events to chain workflows, common use cases (monitoring, digests, price/shopping alerts, social tracking), and deployment options; it also highlights care around web scraping TOS.\n\n4. **An Introduction to Huginn — Mark's Kitchen (blog)** — https://marks.kitchen/blog/huginn/\n   - A hands-on user write-up describing migrating custom scripts into Huginn (example: an \"On This Day\" scenario built from many agents), practical pros/cons (steep learning curve, useful Docker + Traefik setup, moderate memory use), and real-world patterns (Website Agent, formatting, digests).\n\n5. **Huginn (organization page) — GitHub** — https://github.com/huginn\n   - The organization listing (including huginn_agent and docker specs) shows the project's ecosystem and tooling around creating external agents, testing, and Docker-related resources.\n\n## Overall takeaway \u0026 suggested next steps\n\nHuginn is a mature, flexible choice if you want full control over automation and data (self-hosting) and are comfortable with some server administration and initial configuration. To try it quickly, run the official Docker image or the Deploy-to-Heroku flow, follow the Elestio quickstart or GitHub README for setup, and inspect example scenarios/agents to learn patterns. Suggested follow-up searches: \"Huginn agent examples\", \"Huginn Docker memory optimization\", and \"huginn_agent gems\" to find ready-made agents and deployment tips."
  }
]
s5 slack success 2026-03-01 22:28:56 → 2026-03-01 22:28:56
Input (2728 bytes)
[
  {
    "text": "# Huginn — quick summary\n\nHuginn is an open-source, self-hosted automation server that runs configurable \"agents\" to monitor websites, APIs, social streams and trigger actions (think a hackable IFTTT/Zapier you run yourself). It supports many integrations, can be deployed via Docker/Heroku/manual install, and is extensible with external agent gems.\n\n## Most relevant sources\n\n1. **huginn/huginn — GitHub** — https://github.com/huginn/huginn\n   - The official repository describes Huginn as a system for building agents that read the web, watch for events, and act on your behalf, provides examples and full install/deploy instructions (Docker, local, Heroku, OpenShift), and notes extensibility (agent gems) and MIT licensing.\n\n2. **Huginn - Quickstart | Elest.io** — https://elest.io/open-source/huginn/resources/quickstart\n   - A concise quickstart and UI overview that explains core concepts (Agents, Scenarios, Credentials, Events, Jobs) and shows how to log in, create agents/scenarios, and manage background jobs on a hosted instance.\n\n3. **Huginn: An Open-Source, Self-Hosted IFTTT — dev.to (Heroku)** — https://dev.to/heroku/huginn-an-open-source-self-hosted-ifttt-5hd6\n   - A practical overview showing how Agents generate/consume JSON events to chain workflows, common use cases (monitoring, digests, price/shopping alerts, social tracking), and deployment options; it also highlights care around web scraping TOS.\n\n4. **An Introduction to Huginn — Mark's Kitchen (blog)** — https://marks.kitchen/blog/huginn/\n   - A hands-on user write-up describing migrating custom scripts into Huginn (example: an \"On This Day\" scenario built from many agents), practical pros/cons (steep learning curve, useful Docker + Traefik setup, moderate memory use), and real-world patterns (Website Agent, formatting, digests).\n\n5. **Huginn (organization page) — GitHub** — https://github.com/huginn\n   - The organization listing (including huginn_agent and docker specs) shows the project's ecosystem and tooling around creating external agents, testing, and Docker-related resources.\n\n## Overall takeaway \u0026 suggested next steps\n\nHuginn is a mature, flexible choice if you want full control over automation and data (self-hosting) and are comfortable with some server administration and initial configuration. To try it quickly, run the official Docker image or the Deploy-to-Heroku flow, follow the Elestio quickstart or GitHub README for setup, and inspect example scenarios/agents to learn patterns. Suggested follow-up searches: \"Huginn agent examples\", \"Huginn Docker memory optimization\", and \"huginn_agent gems\" to find ready-made agents and deployment tips."
  }
]
Output (2738 bytes)
[
  {
    "ok": true,
    "text": "# Huginn — quick summary\n\nHuginn is an open-source, self-hosted automation server that runs configurable \"agents\" to monitor websites, APIs, social streams and trigger actions (think a hackable IFTTT/Zapier you run yourself). It supports many integrations, can be deployed via Docker/Heroku/manual install, and is extensible with external agent gems.\n\n## Most relevant sources\n\n1. **huginn/huginn — GitHub** — https://github.com/huginn/huginn\n   - The official repository describes Huginn as a system for building agents that read the web, watch for events, and act on your behalf, provides examples and full install/deploy instructions (Docker, local, Heroku, OpenShift), and notes extensibility (agent gems) and MIT licensing.\n\n2. **Huginn - Quickstart | Elest.io** — https://elest.io/open-source/huginn/resources/quickstart\n   - A concise quickstart and UI overview that explains core concepts (Agents, Scenarios, Credentials, Events, Jobs) and shows how to log in, create agents/scenarios, and manage background jobs on a hosted instance.\n\n3. **Huginn: An Open-Source, Self-Hosted IFTTT — dev.to (Heroku)** — https://dev.to/heroku/huginn-an-open-source-self-hosted-ifttt-5hd6\n   - A practical overview showing how Agents generate/consume JSON events to chain workflows, common use cases (monitoring, digests, price/shopping alerts, social tracking), and deployment options; it also highlights care around web scraping TOS.\n\n4. **An Introduction to Huginn — Mark's Kitchen (blog)** — https://marks.kitchen/blog/huginn/\n   - A hands-on user write-up describing migrating custom scripts into Huginn (example: an \"On This Day\" scenario built from many agents), practical pros/cons (steep learning curve, useful Docker + Traefik setup, moderate memory use), and real-world patterns (Website Agent, formatting, digests).\n\n5. **Huginn (organization page) — GitHub** — https://github.com/huginn\n   - The organization listing (including huginn_agent and docker specs) shows the project's ecosystem and tooling around creating external agents, testing, and Docker-related resources.\n\n## Overall takeaway \u0026 suggested next steps\n\nHuginn is a mature, flexible choice if you want full control over automation and data (self-hosting) and are comfortable with some server administration and initial configuration. To try it quickly, run the official Docker image or the Deploy-to-Heroku flow, follow the Elestio quickstart or GitHub README for setup, and inspect example scenarios/agents to learn patterns. Suggested follow-up searches: \"Huginn agent examples\", \"Huginn Docker memory optimization\", and \"huginn_agent gems\" to find ready-made agents and deployment tips."
  }
]