Currently, all URLs are run once a day. This milestone makes that configurable. It allows an admin to group a list of URLs into a queue (instead of just the default `.all()` queryset), and to give that group/job a run frequency. This has a lot of dependencies, but it is a major end-game goal we are striving for. When you put in tens to hundreds of thousands of URLs, they are probably not all needed daily; rather, certain subsets run at different times/frequencies. A job is a named set of URLs run together in a queue. The job should have a description and an "owner" (who requested it), and should allow setting the frequency of the job and which URLs are in its set. This relies on comms/APIs, the Lighthouse run config profile models, and worker monitoring/management; issues still need to be created for these dependencies.
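The job/frequency idea above could be sketched roughly as below. This is a minimal plain-Python illustration, not the project's actual Django model; the `Job` class, its fields, and the `is_due` helper are all hypothetical names chosen for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class Job:
    """A named set of URLs run together in a queue (hypothetical sketch)."""
    name: str
    description: str
    owner: str                       # who requested the job
    frequency: timedelta             # how often the queue should be run
    urls: List[str] = field(default_factory=list)
    last_run: Optional[datetime] = None

    def is_due(self, now: datetime) -> bool:
        """Due if it has never run, or its frequency has elapsed since last run."""
        if self.last_run is None:
            return True
        return now - self.last_run >= self.frequency

# Example: a weekly job is not due the day after it ran, but is due a week later.
job = Job("marketing-pages", "Key landing pages", "alice",
          frequency=timedelta(days=7), urls=["https://example.com/"])
job.last_run = datetime(2023, 1, 1)
print(job.is_due(datetime(2023, 1, 2)))   # False
print(job.is_due(datetime(2023, 1, 9)))   # True
```

In the real app the frequency check would presumably live in a scheduler (e.g. a periodic task) that selects due jobs and enqueues their URL sets, rather than being polled inline like this.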
No due date • 0/1 issues closed

Create our typical admin console, showing things like data auditing, metrics, and any admin-only functions/features we want, so we don't have to use the Django admin.
No due date • 0/5 issues closed

Set up functionality where multiple servers can be clustered to run tests (instead of just one app/server). This allows the number of concurrent test runners to scale enormously. The idea is that a `queen` server acts as the controller and manages all the `worker` servers. Features needed: get the number of workers and which URLs are being processed; pause all workers; re-engage workers; adjust the number of workers; query for known remote non-master servers; query and adjust workload on remote servers; etc.
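The queen/worker control surface listed above might look something like the sketch below. This is an assumption-heavy, in-memory illustration: the `Queen` and `WorkerState` names are hypothetical, and in the real cluster each method would go over the comms/API layer to remote servers rather than mutating a local dict.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class WorkerState:
    """What the queen tracks per worker (hypothetical)."""
    current_url: Optional[str] = None
    paused: bool = False

class Queen:
    """Controller holding a registry of worker servers (in-memory sketch)."""

    def __init__(self) -> None:
        self.workers: Dict[str, WorkerState] = {}

    def register(self, host: str) -> None:
        self.workers[host] = WorkerState()

    def status(self) -> Dict[str, Optional[str]]:
        """Number of workers and which URL each is processing."""
        return {host: w.current_url for host, w in self.workers.items()}

    def pause_all(self) -> None:
        for w in self.workers.values():
            w.paused = True

    def resume_all(self) -> None:
        for w in self.workers.values():
            w.paused = False

    def active_workers(self) -> List[str]:
        return [h for h, w in self.workers.items() if not w.paused]

queen = Queen()
queen.register("worker-1")
queen.register("worker-2")
queen.workers["worker-1"].current_url = "https://example.com/"
queen.pause_all()
print(queen.active_workers())          # []
queen.resume_all()
print(len(queen.active_workers()))     # 2
```

The design choice worth noting is that the queen only holds control state; the actual test execution stays on the workers, so "adjust the number of workers" reduces to registering/deregistering hosts in this registry.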
No due date • 0/1 issues closed

Add the ability for an admin to create a Lighthouse config profile, then allow any given config profile to be used when running a test. For example, there would be "mobile", "desktop", and "slow 3g" profiles/settings, and when a URL is run for a test, it would be accompanied by the settings to use for that test. The ID of the config profile would be passed back to the Django app and stored on the LighthouseRun model, so when viewing a report you can see what settings were used for that run. This requires a full end-to-end set of updates to make all the pieces work together. It gets messy around the "average" data shown on the report detail page: that will have to be split into an "averages per config profile" tabbed display.
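The "averages per config profile" split described above is essentially a group-by over stored runs. A minimal sketch, assuming hypothetical run records where each LighthouseRun carries its config profile ID and a score:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical LighthouseRun rows: each stores the config profile it ran
# with, so averages can be computed per profile instead of globally.
runs = [
    {"profile": "mobile",  "score": 60},
    {"profile": "mobile",  "score": 70},
    {"profile": "desktop", "score": 90},
    {"profile": "slow 3g", "score": 40},
]

def averages_per_profile(runs):
    """Group scores by config profile and average each group separately.

    A single global average would mix "slow 3g" runs with "desktop"
    runs, which is why the report page needs per-profile tabs.
    """
    by_profile = defaultdict(list)
    for run in runs:
        by_profile[run["profile"]].append(run["score"])
    return {profile: mean(scores) for profile, scores in by_profile.items()}

print(averages_per_profile(runs))
# {'mobile': 65, 'desktop': 90, 'slow 3g': 40}
```

In the Django app this would presumably be an aggregate query over LighthouseRun grouped by the config profile foreign key, with one tab rendered per group.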
No due date • 0/6 issues closed