57 Commits
v0.8 ... v0.9.9

Author SHA1 Message Date
relikd
4689e9fccb docs: update changelog + bump version 2022-12-21 01:03:53 +01:00
relikd
2e7cc026f6 fix: keep order of vgroups filter if no order_by provided 2022-12-21 00:50:32 +01:00
relikd
14a7fe848f docs: rearrange example into sections 2022-12-21 00:48:52 +01:00
relikd
227c4cdac9 chore: bump v.0.9.8 2022-12-20 02:51:44 +01:00
relikd
3139b5205a docs: add changelog 2022-12-20 01:28:32 +01:00
relikd
f32046dffb docs: update examples + readme 2022-12-20 01:28:12 +01:00
relikd
85df707d63 refactor: yield GroupBySource instead of slugified key 2022-12-20 00:11:51 +01:00
relikd
7582029abf refactor: init GroupBySource with Config 2022-12-20 00:11:05 +01:00
relikd
fb9a690f79 feat: build_all only once per GroupBySource 2022-12-08 01:35:19 +01:00
relikd
491c06e22f refactor: artifact pruning 2022-12-08 00:32:37 +01:00
relikd
7d668892a6 refactor: group resolver entries by config key 2022-12-05 23:04:51 +01:00
relikd
4b63fae4d6 fix: dont use query for children total count 2022-11-25 19:19:07 +01:00
relikd
521ac39a83 refactor: rename group -> key_obj 2022-11-22 19:41:07 +01:00
relikd
390d44a02c fix: undo improvement that breaks make_once(None) 2022-11-22 19:35:51 +01:00
relikd
7c324e5909 fix: Generator yield type 2022-11-22 18:56:01 +01:00
relikd
0891be06e2 fix: build queue and dependencies + add key_map_fn 2022-11-22 10:58:14 +01:00
relikd
e7ae59fadf chore: update types + minor fixes 2022-11-22 10:51:28 +01:00
relikd
b75102a211 feat: add support for pagination 2022-10-25 01:47:59 +02:00
relikd
7c98d74875 fix: throw no exception if print before finalize 2022-10-25 01:21:57 +02:00
relikd
3e60e536f5 fix: use typing hint for GroupBySource.slug 2022-10-25 01:20:48 +02:00
relikd
d58529f4cc fix: most_used_key with empty list 2022-10-24 21:34:27 +02:00
relikd
03475e3e5a feat: use Query for children instead of Record list 2022-08-06 18:36:48 +02:00
relikd
5387256b93 fix: split_strip() arg must be str 2022-08-06 17:45:38 +02:00
relikd
e67489ab0b chore: update examples and Readme 2022-08-03 08:17:26 +02:00
relikd
8e250fb665 feat: add order_by to group children 2022-08-03 08:16:56 +02:00
relikd
a0b53c7566 feat: add order_by to vgroups() 2022-07-23 20:34:04 +02:00
relikd
f13bd3dfc6 fix: GroupBySource not updated on template edit 2022-07-23 19:44:26 +02:00
relikd
fb8321744e feat: add support for alternatives 2022-07-23 13:58:46 +02:00
relikd
eb0a60ab33 v0.9.7 2022-04-22 14:43:07 +02:00
relikd
c149831808 keep order of vgroups 2022-04-19 23:21:20 +02:00
relikd
7f28c53107 gitignore rename dist-env 2022-04-13 22:27:33 +02:00
relikd
5118d19532 bump v0.9.6 2022-04-13 21:30:55 +02:00
relikd
1d9629566c efficient build
- postpone building until really needed
- rebuild only if artifacts change
- no build on source update
- prune takes current resolver state instead of global var
2022-04-13 15:41:57 +02:00
relikd
8ae5376d41 fix: most_used_key 2022-04-12 23:11:03 +02:00
relikd
340bc6611b one groupby per build thread + new resolver class 2022-04-11 01:41:17 +02:00
relikd
9dcd704283 move logic to VGroups.iter 2022-04-10 23:01:41 +02:00
relikd
d689a6cdf7 small fixes
- set child default object to field key
- strip whitespace if split
- ignore case for sort order
- setup.py package instead of module
2022-04-10 22:57:46 +02:00
relikd
b05dd31ff0 v0.9.5 2022-04-07 13:33:59 +02:00
relikd
16a26afdce fix data model enumeration with no flow blocks 2022-04-07 01:01:23 +02:00
relikd
c618ee458b v0.9.4 2022-04-06 22:12:06 +02:00
relikd
55916a4519 fix duplicate vobj for same slug 2022-04-06 20:52:53 +02:00
relikd
a694149d04 fix missing getitem 2022-04-06 17:55:27 +02:00
relikd
831cfa4e9c readme: link to relevant files 2022-04-06 17:36:19 +02:00
relikd
298e0d4a62 v0.9.3 2022-04-06 15:47:38 +02:00
relikd
2a6bdf05fd update example readme v0.9.3 2022-04-06 15:42:02 +02:00
relikd
df4be7c60a builtin filter collision rename groupby -> vgroups 2022-04-06 13:29:19 +02:00
relikd
637524a615 update example to v0.9.3 2022-04-06 13:16:44 +02:00
relikd
a6d9f715f9 allow {key} in slug + allow sorting and hashing 2022-04-06 13:11:49 +02:00
relikd
d6df547682 config.root trailing slash + allow any in fields 2022-04-06 12:29:35 +02:00
relikd
ebc29459ec remove ConfigKey and GroupKey types 2022-04-06 00:29:40 +02:00
relikd
adb26e343e split py into modules 2022-04-05 22:58:53 +02:00
relikd
97b40b4886 refactoring II (watcher config + dependency mgmt) 2022-04-05 20:29:15 +02:00
relikd
479ff9b964 add virtual path resolver
this allows the admin UI to preview groupby pages
2022-04-02 00:14:22 +02:00
relikd
626c0ab13a fix processed lookup 2022-04-01 13:34:35 +02:00
relikd
2de02ed50c add examples 2022-03-31 04:20:01 +02:00
relikd
c9c1ab69b1 complete refactoring to ensure group before build
Due to a concurrency bug (lektor/lektor#1017), a source file is
sporadically not updated because `before-build` is evaluated faster
than `before-build-all`. Fixed with a redundant build process.

Also:
- adds before- and after-init hooks
- encapsulates logic into separate classes
- fix virtual path and remove virtual path resolver
- more type hints (incl. bugfixes)
2022-03-31 04:16:18 +02:00
relikd
dfdf55a5a5 group before parent node + emit init-once 2022-03-27 10:11:29 +02:00
42 changed files with 2211 additions and 657 deletions

1
.gitattributes vendored Normal file

@@ -0,0 +1 @@
examples/** linguist-documentation

2
.gitignore vendored

@@ -1,5 +1,5 @@
.DS_Store .DS_Store
/dist-env/ /env-publish/
__pycache__/ __pycache__/
*.py[cod] *.py[cod]

163
CHANGELOG.md Normal file

@@ -0,0 +1,163 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.9.9] 2022-12-21
### Fixed
- Keep original sorting order in `vgroups` filter if no `order_by` is set.
## [0.9.8] 2022-12-20
### Added
- Support for Alternatives
- Support for Pagination
- Support for additional yield types (str, int, float, bool)
- Support for sorting `GroupBySource` children
- Support for sorting `vgroups` filter
- Config option `.replace_none_key` to replace `None` with another value
- Config option `.key_obj_fn` (function) can be used to map complex objects to simple values (e.g., list of strings -> count as int). Inside the jinja expression you may use `X` (the object) and `ARGS` (the `GroupByCallbackArgs`).
- New property `supports_pagination` (bool) for `GroupBySource`
- Partial building. Only `Watcher`s that are used during template rendering are processed.
- Rebuild `GroupBySource` only once after a `Record` update
### Changed
- Use `Query` for children instead of `Record` list
- Rename `GroupBySource.group` to `GroupBySource.key_obj`
- Yield return `GroupBySource` during `watcher.grouping()` instead of slugified key
- Postpone `Record` processing until `make_once()`
- Allow preprocessing with the optional `pre_build=True` parameter of `groupby.add_watcher()` (useful for modifying a source before it is built)
- Evaluate `fields` attributes upon access, not at initialization (this enables more fine-grained dependency tracking)
- Resolver groups virtual pages per groupby config key (before it was just a list of all groupby sources mixed together)
- Refactor pruning by adding a `VirtualPruner` vobj
- Pruning is performed directly on the database
- `GroupBySource.path` may include a page number suffix `/2`
- `GroupBySource.url_path` may include a page number and custom `url_suffix`
### Removed
- `GroupingCallback` may no longer yield an extra object. The usage was cumbersome and can be replaced with the `.fields` config option.
### Fixed
- `GroupBySource` not updated on template edit
- `most_used_key` with empty list
- Don't throw exception if `GroupBySource` is printed before finalize
- Hotfix for Lektor issue #1085 by avoiding `TypeError`
- Add missing dependencies during `vgroups` filter
- Include model-fields with null value on yield
## [0.9.7] 2022-04-22
### Changed
- Refactor `GroupBySource` init method
- Decouple `fields` expression processing from init
### Fixed
- Keep order of groups intact
## [0.9.6] 2022-04-13
### Added
- Set extra-info default to the model-key that generated the group.
- Reuse previously declared `fields` attributes in later `fields`.
### Changed
- Thread-safe building. Each groupby is performed on the builder which initiated the build.
- Deferred building. The groupby callback is only called when it is accessed for the first time.
- Build-on-access. If there are no changes, no groupby build is performed.
### Fixed
- Inconsistent behavior due to concurrent building (see above)
- Case-insensitive default group sort
- Using the `split` config option now trims whitespace
- `most_used_key` now works properly
## [0.9.5] 2022-04-07
### Fixed
- Allow model instances without flow-blocks
## [0.9.4] 2022-04-06
### Fixed
- Error handling for GroupBySource `__getitem__` by raising `__missing__`
- Reuse GroupBySource if two group names result in the same slug
## [0.9.3] 2022-04-06
### Added
- Config option `.fields` can add arbitrary attributes to the groupby
- Config option `.key_map` allows replacing keys with other values (e.g., "C#" -> "C-Sharp")
- Set `slug = None` to prevent rendering of groupby pages
- Query groupby of children
### Changed
- Another full refactoring, constantly changing, everything is different ... again
## [0.9.2] 2022-04-01
### Fixed
- Prevent duplicate processing of records
## [0.9.1] 2022-03-31
### Added
- Example project
- Before- and after-init hooks
- More type hints (incl. bugfixes)
### Changed
- Encapsulate logic into separate classes
### Fixed
- Concurrency issues by complete refactoring
- Virtual path and remove virtual path resolver
## [0.9] 2022-03-27
### Fixed
- Groupby is now generated before main page
- PyPi readme
## [0.8] 2022-03-25
Initial release
[Unreleased]: https://github.com/relikd/lektor-groupby-plugin/compare/v0.9.9...HEAD
[0.9.9]: https://github.com/relikd/lektor-groupby-plugin/compare/v0.9.8...v0.9.9
[0.9.8]: https://github.com/relikd/lektor-groupby-plugin/compare/v0.9.7...v0.9.8
[0.9.7]: https://github.com/relikd/lektor-groupby-plugin/compare/v0.9.6...v0.9.7
[0.9.6]: https://github.com/relikd/lektor-groupby-plugin/compare/v0.9.5...v0.9.6
[0.9.5]: https://github.com/relikd/lektor-groupby-plugin/compare/v0.9.4...v0.9.5
[0.9.4]: https://github.com/relikd/lektor-groupby-plugin/compare/v0.9.3...v0.9.4
[0.9.3]: https://github.com/relikd/lektor-groupby-plugin/compare/v0.9.2...v0.9.3
[0.9.2]: https://github.com/relikd/lektor-groupby-plugin/compare/v0.9.1...v0.9.2
[0.9.1]: https://github.com/relikd/lektor-groupby-plugin/compare/v0.9...v0.9.1
[0.9]: https://github.com/relikd/lektor-groupby-plugin/compare/v0.8...v0.9
[0.8]: https://github.com/relikd/lektor-groupby-plugin/releases/tag/v0.8

Makefile

@@ -1,21 +1,17 @@
.PHONY: help dist: setup.py lektor_groupby/*
help:
@echo 'commands:'
@echo ' dist'
dist-env:
@echo Creating virtual environment...
@python3 -m venv 'dist-env'
@source dist-env/bin/activate && pip install twine
.PHONY: dist
dist: dist-env
[ -z "$${VIRTUAL_ENV}" ] # you can not do this inside a virtual environment. [ -z "$${VIRTUAL_ENV}" ] # you can not do this inside a virtual environment.
rm -rf dist
@echo Building... @echo Building...
python3 setup.py sdist bdist_wheel python3 setup.py sdist bdist_wheel
@echo
rm -rf ./*.egg-info/ ./build/ MANIFEST rm -rf ./*.egg-info/ ./build/ MANIFEST
env-publish:
@echo Creating virtual environment...
@python3 -m venv 'env-publish'
@source env-publish/bin/activate && pip install twine
.PHONY: publish
publish: dist env-publish
[ -z "$${VIRTUAL_ENV}" ] # you can not do this inside a virtual environment.
@echo Publishing... @echo Publishing...
@echo "\033[0;31mEnter your PyPI token:\033[0m" @echo "\033[0;31mEnter PyPI token in password prompt:\033[0m"
@source dist-env/bin/activate && export TWINE_USERNAME='__token__' && twine upload dist/* @source env-publish/bin/activate && export TWINE_USERNAME='__token__' && twine upload dist/*

172
README.md

@@ -1,168 +1,26 @@
# Lektor Plugin: groupby # Lektor Plugin: groupby
A generic grouping / clustering plugin. Can be used for tagging or similar tasks. A generic grouping / clustering plugin.
Can be used for tagging or similar tasks.
The grouping algorithm is performed once.
Contrary to, at least, cubic runtime if doing the same with Pad queries.
Overview: Install this plugin or modify your Lektor project file:
- the [basic example](#usage-basic-example) goes into detail how this plugin works.
- the [quick config](#usage-quick-config) example show how you can use the plugin config to setup a quick and easy tagging system.
- the [complex example](#usage-a-slightly-more-complex-example) touches on the potential of what is possible.
```sh
lektor plugin add groupby
```
## Usage: Basic example Optionally, enable a basic config:
Lets start with a simple example: adding a tags field to your model.
Assuming you have a `blog-entry.ini` that is used for all children of `/blog` path.
#### `models/blog-entry.ini`
```ini ```ini
[fields.tags] [tags]
label = Tags root = /
type = strings slug = tag/{key}.html
myvar = true template = tag.html
[fields.body]
label = Content
type = markdown
```
Notice we introduce a new attribute variable: `myvar = true`.
The name can be anything here, we will come to that later.
The only thing that matters is that the value is a boolean and set to true.
Edit your blog entry and add these two new tags:
```
Awesome
Latest News
```
Next, we need a plugin to add the groupby event listener.
#### `packages/test/lektor_my_tags_plugin.py`
```python
def on_groupby_init(self, groupby, **extra):
@groupby.watch('/blog', 'myvar', flatten=True, template='myvar.html',
slug='tag/{group}/index.html')
def do_myvar(args):
page = args.record # extract additional info from source
fieldKey, flowIndex, flowKey = args.key # or get field index directly
# val = page.get(fieldKey).blocks[flowIndex].get(flowKey)
value = args.field # list type since model is 'strings' type
for tag in value:
yield slugify(tag), {'val': tag, 'tags_in_page': len(value)}
```
There are a few important things here:
1. The first parameter (`'/blog'`) is the root page of the groupby.
All results will be placed under this directory, e.g., `/blog/tags/clean/`.
You can also just use `/`, in which case the same path would be `/tags/clean/`.
Or create multiple listeners, one for `/blog/` and another for `/projects/`, etc.
2. The second parameter (`'myvar'`) must be the same attribute variable we used in our `blog-entry.ini` model.
The groupby plugin will traverse all models and search for this attribute name.
3. Flatten determines how Flow elements are processed.
If `False`, the callback function `convert_myvar()` is called once per Flow element (if the Flow element has the `myvar` attribute attached).
If `True` (default), the callback is called for all Flow blocks individually.
4. The template `myvar.html` is used to render the grouping page.
This parameter is optional.
If no explicit template is set, the default template `groupby-myvar.html` would be used. Where `myvar` is replaced with whatever attribute you chose.
5. Finally, the slug `tag/{group}/index.html` is where the result is placed.
The default value for this parameter is `{attrib}/{group}/index.html`.
In our case, the default path would resolve to `myvar/awesome/index.html`.
We explicitly chose to replace the default slug with our own, which ignores the attrib path component and instead puts the result pages inside the `/tag` directory.
(PS: you could also use for example `t/{group}.html`, etc.)
So much for the `args` parameter.
The callback body **can** produce groupings but does not have to.
If you choose to produce an entry, you have to `yield` a tuple pair of `(groupkey, extra-info)`.
`groupkey` is used to combine & cluster pages and must be URL-safe.
The `extra-info` is passed through to your template file.
You can yield more than one entry per source or filter / ignore pages if you don't yield anything.
Our simple example will generate the output files `tag/awesome/index.html` and `tag/latest-news/index.html`.
Lets take a look at the html next.
#### `templates/myvar.html`
```html
<h2>Path: {{ this | url(absolute=True) }}</h2>
<div>This is: {{this}}</div>
<ul>
{%- for child in this.children %}
<li>Page: {{ child.record.path }}, Name: {{ child.extra.val }}, Tag count: {{ child.extra.tags_in_page }}</li>
{%- endfor %}
</ul>
```
Notice, we can use `child.record` to access the referenced page of the group cluster.
`child.extra` contains the additional information we previously passed into the template.
The final result of `tag/latest-news/index.html`:
```
Path: /tag/latest-news/
This is: <GroupBySource attribute="myvar" group="latest-news" template="myvar.html" slug="tag/latest-news/" children=1>
- Page: /blog/barss, Name: Latest News, Tag count: 2
```
## Usage: Quick config
The whole example above can be simplified with a plugin config:
#### `configs/groupby.ini`
```ini
[myvar]
root = /blog/
slug = tag/{group}/index.html
template = myvar.html
split = ' ' split = ' '
``` ```
You still need to add a separate attribute to your model (step 1), but anything else is handled by the config file. Or dive into plugin development...
All of these fields are optional and fallback to the default values stated above.
The newly introduced option `split` will be used as string delimiter. For usage examples, refer to the [examples](https://github.com/relikd/lektor-groupby-plugin/tree/main/examples) readme.
This allows to have a field with `string` type instead of `strings` type.
If you do not provide the `split` option, the whole field value will be used as group key.
Note: split is only used on str fields (`string` type), not lists (`strings` type).
The emitted `extra-info` for the child is the original key value.
E.g., `Latest News,Awesome` with `split = ,` yields `('latest-news', 'Latest News')` and `('awesome', 'Awesome')`.
## Usage: A slightly more complex example
There are situations though, where a simple config file is not enough.
The following plugin will find all model fields with attribute `inlinetags` and search for in-text occurrences of `{{Tagname}}` etc.
```python
from lektor.markdown import Markdown
from lektor.types.formats import MarkdownDescriptor
from lektor.utils import slugify
import re
_regex = re.compile(r'{{([^}]{1,32})}}')
def on_groupby_init(self, groupby, **extra):
@groupby.watch('/', 'inlinetags', slug='tags/{group}/')
def convert_inlinetags(args):
arr = args.field if isinstance(args.field, list) else [args.field]
for obj in arr:
if isinstance(obj, (Markdown, MarkdownDescriptor)):
obj = obj.source
if isinstance(obj, str) and str:
for match in _regex.finditer(obj):
tag = match.group(1)
yield slugify(tag), tag
```
This generic approach does not care what data-type the field value is:
`strings` fields will be expanded and enumerated, Markdown will be unpacked.
You can combine this mere tag-detector with text-replacements to point to the actual tags-page.


@@ -0,0 +1,5 @@
[project]
name = GroupBy Examples
[packages]
lektor-groupby = 0.9.8

7
examples/Makefile Normal file

@@ -0,0 +1,7 @@
.PHONY: server clean plugins
server:
lektor server
clean:
lektor clean --yes -v
plugins:
lektor plugins flush-cache && lektor plugins list

363
examples/README.md Normal file

@@ -0,0 +1,363 @@
# Usage
Overview:
- [config example](#config-file) shows how you can use the plugin config to set up a quick and easy tagging system.
- [simple example](#simple-example) goes into detail on how to use it in your own plugin.
- [advanced example](#advanced-example) touches on the potential of plugin development.
- [Misc](#misc) shows other use-cases.
After reading this tutorial, have a look at other plugins that use `lektor-groupby`:
- [lektor-inlinetags](https://github.com/relikd/lektor-inlinetags-plugin)
## About
To use the groupby plugin you have to add an attribute to your model file.
For this tutorial you can refer to the [`models/page.ini`](./models/page.ini) model:
```ini
[fields.tags]
label = Tags
type = strings
testA = true
testB = true
[fields.body]
label = Body
type = markdown
testC = true
```
We define three custom attributes `testA`, `testB`, and `testC`.
You may add custom attributes to all of the fields.
It is crucial that the value of the custom attribute is set to true.
The attribute name is later used for grouping.
## Config File
The easiest way to add tags to your site is by defining the [`configs/groupby.ini`](./configs/groupby.ini) file.
```ini
[testA]
root = /
slug = config/{key}.html
template = example-config.html
split = ' '
enabled = True
key_obj_fn = (X.upper() ~ ARGS.key.fieldKey) if X else 'empty'
replace_none_key = unknown
[testA.children]
order_by = -title, body
[testA.pagination]
enabled = true
per_page = 5
url_suffix = .page.
[testA.fields]
title = "Tagged: " ~ this.key_obj
[testA.key_map]
C# = c-sharp
```
### Config: Main Section
The configuration parameters are:
1. The section title (`testA`) must be one of the attribute variables we defined in our model.
2. The `root` parameter (`/`) is the root page of the groupby.
All results will be placed under this directory, e.g., `/tags/tagname/`.
If you use `root = /blog`, the results path will be `/blog/tags/tagname/`.
The groupby plugin will traverse all sub-pages which contain the attribute `testA`.
3. The `slug` parameter (`config/{key}.html`) is where the results are placed.
In our case, the path resolves to `config/tagname.html`.
The default value is `{attrib}/{key}/index.html` which would resolve to `testA/tagname/index.html`.
If the value contains `{key}`, that placeholder is simply replaced with the group key.
In all other cases, the field value is evaluated in a jinja context.
4. The `template` parameter (`example-config.html`) is used to render the results page.
If no explicit template is set, the default template `groupby-testA.html` will be used.
Where `testA` is replaced with whatever attribute you chose.
5. The `split` parameter (`' '`) will be used as string delimiter.
Fields of type `strings` and `checkboxes` are already lists and don't need splitting.
The split is only relevant for fields of type `string` or `text`.
These single-line fields are then expanded to lists as well.
If you do not provide the `split` option, the whole field value will be used as tagname.
6. The `enabled` parameter allows you to quickly disable the grouping.
7. The `key_obj_fn` parameter (jinja) accepts any function-like snippet or function call.
The context provides two variables, `X` and `ARGS`.
The former is the raw value of the grouping; this may be a text field, markdown, or whatever custom type you have provided.
The latter is a named tuple with `record`, `key`, and `field` values (see the [simple example](#simple-example) and the sketch at the end of this section).
8. The `replace_none_key` parameter (string) is applied after `key_obj_fn` (if provided) and maps empty values to a default value.
You can have multiple listeners, e.g., one for `/blog/` and another for `/projects/`.
Just create as many custom attributes as you like, each having its own section (and subsections).
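To illustrate item 7 above, here is a rough Python sketch of what the `key_obj_fn` expression from this config computes. The real evaluation happens inside a jinja context; the mock `FieldKeyPath` and the sample tag `'Blog'` below are only assumed inputs:
```python
from collections import namedtuple

# mock of the ARGS.key part of GroupByCallbackArgs: (fieldKey, flowIndex, flowKey)
FieldKeyPath = namedtuple('FieldKeyPath', ['fieldKey', 'flowIndex', 'flowKey'])

def key_obj_fn(X, key):
    # mirrors the jinja expression: (X.upper() ~ ARGS.key.fieldKey) if X else 'empty'
    return (X.upper() + key.fieldKey) if X else 'empty'

key = FieldKeyPath('tags', None, None)  # the testA attribute sits on the 'tags' field
print(key_obj_fn('Blog', key))          # -> 'BLOGtags'
print(key_obj_fn('', key))              # -> 'empty'
```
The returned value becomes the group's `key_obj`; it is then passed through `.key_map` and slugified into `key`.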
### Config Subsection: .children
The `.children` subsection currently has a single config field: `order_by`.
The usual [order-by](https://www.getlektor.com/docs/guides/page-order/) rules apply (comma separated list of keys with `-` for reversed order).
The order-by keys can be any page attribute of the children.
### Config Subsection: .pagination
The `.pagination` subsection accepts the same configuration options as the default Lektor pagination ([model](https://www.getlektor.com/docs/models/children/#pagination), [guide](https://www.getlektor.com/docs/guides/pagination/)).
Plus, an additional `url_suffix` parameter if you would like to customize the URL scheme.
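If you register a watcher from a plugin instead of the ini file, both subsections have programmatic counterparts. A minimal sketch (the plugin class name is made up; the `set_order_by`/`set_pagination` calls mirror the simple-example plugin shown below):
```python
from lektor.pluginsystem import Plugin


class MyTagsPlugin(Plugin):
    def on_groupby_before_build_all(self, groupby, builder, **extra):
        watcher = groupby.add_watcher('testA', {
            'root': '/',
            'slug': 'config/{key}.html',
            'template': 'example-config.html',
        })
        # equivalent of [testA.children] order_by = -title, body
        watcher.config.set_order_by('-title,body')
        # equivalent of the [testA.pagination] section above
        watcher.config.set_pagination(enabled=True, per_page=5, url_suffix='.page.')
```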
### Config Subsection: .fields
The `.fields` subsection is a list of key-value pairs which will be added as attributes to your grouping.
You can access them in your template (e.g., `{{this.title}}`).
All of the `.fields` values are evaluated in a jinja context, so be cautious when using plain strings.
Further, they are evaluated on access, not when they are defined (see the sketch below).
The built-in field attributes are:
- `key_obj`: the original grouping object (before slugify), e.g., "A Title?"
- `key`: slugified value of `key_obj`, e.g., "a-title"
- `record`: parent node, e.g., `Page(path="/")`
- `slug`: url path under parent node, e.g. "config/a-title.html" (can be `None`)
- `children`: the elements of the grouping (a `Query` of `Record` type)
- `config`: configuration object (see below)
The `config` attribute contains the values that created the group:
- `key`: attribute key, e.g., `TestA`
- `root`: as provided by init, e.g., `/`
- `slug`: the raw value, e.g., `config/{key}.html`
- `template`: as provided by init, e.g., `example-config.html`
- `key_obj_fn`: as provided by init, e.g., `X.upper() if X else 'empty'`
- `replace_none_key`: as provided by init, e.g., `unknown`
- `enabled`: boolean
- `dependencies`: path to config file (if initialized from config)
- `fields`: raw values from `TestA.fields`
- `key_map`: raw values from `TestA.key_map`
- `pagination`: raw values from `TestA.pagination`
- `order_by`: list of key-strings from `TestA.children.order_by`
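As a sketch of the programmatic counterpart (building on a `watcher` obtained from `groupby.add_watcher()`, as in the simple example below), `.fields` entries can also be set from a plugin; string values are treated as jinja expressions, any other value is passed through unchanged:
```python
from datetime import datetime

watcher.config.set_fields({
    'date': datetime.now(),                # plain value, used as-is
    'title': '"Tagged: " ~ this.key_obj',  # string -> evaluated as jinja on access
})
```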
### Config Subsection: .key_map
Without any changes, the `key` value will just be `slugify(key_obj)`.
However, the `.key_map` subsection will replace `key_obj` with whatever replacement value is provided in the mapping and is then slugified.
Take the given example, `C# = c-sharp`, which would otherwise be slugified to `c`.
This is equivalent to `slugify(key_map.get(key_obj, key_obj))`.
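A minimal Python sketch of that behavior, using Lektor's own `slugify` and the `key_map` from the config above:
```python
from lektor.utils import slugify

key_map = {'C#': 'c-sharp'}

def make_key(key_obj):
    # the replacement is applied once, before slugify; values are not re-mapped
    return slugify(key_map.get(key_obj, key_obj))

print(slugify('C#'))            # -> 'c'        (what you would get without the map)
print(make_key('C#'))           # -> 'c-sharp'  (mapped, then slugified)
print(make_key('Latest News'))  # -> 'latest-news'
```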
### Config Template
In your template file ([`templates/example-config.html`](./templates/example-config.html)), you have access to the aforementioned attributes:
```jinja
<h2>{{ this.title }}</h2>
<p>Key: {{ this.key }}, Attribute: {{ this.config.key }}</p>
<ul>
{%- for child in this.children %}
<li>Page: {{ child.path }}</li>
{%- endfor %}
</ul>
```
## Simple example
Relevant files:
- [`packages/simple-example/lektor_simple.py`](./packages/simple-example/lektor_simple.py)
- [`templates/example-simple.html`](./templates/example-simple.html)
```python
def on_groupby_before_build_all(self, groupby, builder, **extra):
watcher = groupby.add_watcher('testB', {
'root': '/blog',
'slug': 'simple/{key}/index.html',
'template': 'example-simple.html',
})
watcher.config.set_key_map({'Foo': 'bar'})
watcher.config.set_fields({'date': datetime.now()})
@watcher.grouping(flatten=True)
def convert_simple_example(args):
# Yield groups
value = args.field # type: list # since model is 'strings' type
for tag in value:
yield tag
```
This example is roughly equivalent to the config example above: the parameters of the `groupby.add_watcher` function correspond to the same config parameters.
Additionally, you can set other types in `set_fields` (all strings are evaluated in jinja context!).
Refer to `lektor_simple.py` for all available configuration options.
The `@watcher.grouping` callback generates all groups for a single watcher-attribute.
The callback body **can** produce groupings but does not have to.
If you choose to produce an entry, you have to `yield` a grouping object (string, int, bool, float, or object).
In any case, `key_obj` is slugified (see above) and then used to combine & cluster pages.
You can yield more than one entry per source.
Or ignore pages if you don't yield anything.
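For instance, a callback could skip certain pages entirely and emit more than one group per page. A hypothetical sketch, reusing the `watcher` from above (the draft check and the extra `_model` group are made-up illustrations, not part of the example project):
```python
@watcher.grouping(flatten=True)
def convert_with_filter(args):
    if args.record['title'].startswith('Draft'):
        return                        # no yield -> this page is ignored
    for tag in args.field or []:      # one group per tag ...
        yield tag
    yield args.record['_model']       # ... plus one extra group, e.g. by model name
```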
The `@watcher.grouping` decorator takes one optional parameter:
- `flatten` determines how Flow elements are processed.
If `False`, the callback function is called once per Flow element.
If `True` (default), the callback is called for all Flow-Blocks of the Flow individually.
The attribute `testB` can be attached to either the Flow or a Flow-Block regardless.
The `args` parameter of the `convert_simple_example()` function is a named tuple with three attributes:
1. The `record` points to the `Page` record that contains the tag.
2. The `key` tuple `(field-key, flow-index, flow-key)` tells which field is processed.
For Flow types, `flow-index` and `flow-key` are set, otherwise they are `None`.
3. The `field` value is the content of the processed field.
For Flow fields, the field value is equivalent to the following (for plain fields it is simply `args.record[k.fieldKey]`):
```python
k = args.key
field = args.record[k.fieldKey].blocks[k.flowIndex].get(k.flowKey)
```
Again, you can use all properties in your template.
```jinja
<p>Custom field date: {{this.date}}</p>
<ul>
{%- for child in this.children %}
<li>page "{{child.path}}" with tags: {{child.tags}}</li>
{%- endfor %}
</ul>
```
## Advanced example
Relevant files:
- [`configs/advanced.ini`](./configs/advanced.ini)
- [`packages/advanced-example/lektor_advanced.py`](./packages/advanced-example/lektor_advanced.py)
- [`templates/example-advanced.html`](./templates/example-advanced.html)
The following example is similar to the previous one, except that it loads a config file and replaces in-text occurrences of `{{Tagname}}` with `<a href="/tag/">Tagname</a>`.
```python
def on_groupby_before_build_all(self, groupby, builder, **extra):
# load config
config = self.get_config()
regex = config.get('testC.pattern.match')
try:
regex = re.compile(regex)
except Exception as e:
print('inlinetags.regex not valid: ' + str(e))
return
# load config directly (which also tracks dependency)
watcher = groupby.add_watcher('testC', config, pre_build=True)
@watcher.grouping()
def convert_replace_example(args):
# args.field assumed to be Markdown
obj = args.field.source
url_map = {} # type Dict[str, str]
for match in regex.finditer(obj):
tag = match.group(1)
vobj = yield tag
if not hasattr(vobj, 'custom_attr'):
vobj.custom_attr = []
vobj.custom_attr.append(tag)
url_map[tag] = vobj.url_path
print('[advanced] slugify:', tag, '->', vobj.key)
def _fn(match: re.Match) -> str:
tag = match.group(1)
return f'<a href="{url_map[tag]}">{tag}</a>'
args.field.source = regex.sub(_fn, obj)
```
Notice that `add_watcher` accepts a config file as a parameter, which also tracks dependencies and rebuilds pages when you edit the config file.
Further, the `yield` call returns a `GroupBySource` virtual object.
You can use this object to add custom static attributes (similar to dynamic attributes with the `.fields` subsection config).
Not all attributes are available at this time, as the grouping is still in progress.
But you can use `vobj.url_path` to get the target URL or `vobj.key` to get the slugified object-key (substitutions from `key_map` are already applied).
Usually, the grouping is postponed until the very end of the build process.
However, in this case we want to modify the source before it is built by Lektor.
For this situation we need to set `pre_build=True` in our `groupby.add_watcher()` call.
All watchers with this flag will be processed before any Page is built.
**Note:** Avoid this if you can, since it is a performance regression.
The grouping for these watchers will be performed each time you navigate from one page to another.
This example uses a Markdown model type as source.
For Markdown fields, we can modify the `source` attribute directly.
All other field types need to be accessed via `args.record` key indirection (see [simple example](#simple-example)).
```ini
[testC]
root = /
slug = "advanced/{}/".format(this.key)
template = example-advanced.html
[testC.pattern]
match = {{([^}]{1,32})}}
```
The config file takes the same parameters as the [config example](#config-file).
We introduce a new config option `testC.pattern.match`.
This regular expression matches `{{` + any string of 1 to 32 characters + `}}`.
Notice that the parentheses (`()`) capture only the inner part, so the replace function (`re.sub`) can drop the surrounding `{{ }}`.
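A quick Python sketch of what this pattern does; the sample text comes from the example content, and the generated URL is only a stand-in for the real `vobj.url_path`:
```python
import re

regex = re.compile(r'{{([^}]{1,32})}}')
text = 'This is an example blog post. {{Tag}} in {{blog}}'

print(regex.findall(text))  # -> ['Tag', 'blog']  (only the captured inner part)

def _link(match: re.Match) -> str:
    tag = match.group(1)
    return f'<a href="/advanced/{tag.lower()}/">{tag}</a>'  # stand-in URL

print(regex.sub(_link, text))
# This is an example blog post. <a href="/advanced/tag/">Tag</a> in <a href="/advanced/blog/">blog</a>
```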
## Misc
### Omit output with empty slugs
It was briefly mentioned above that slugs can be `None` (e.g., manually set to `slug = None`).
This is useful if you do not want to create subpages but rather an index page containing all groups.
You can combine this with the next use-case.
### Index pages & Group query + filter
```jinja
{%- for x in this|vgroups(keys=['TestA', 'TestB'], fields=[], flows=[], recursive=True, order_by='key_obj') %}
<a href="{{ x|url }}">({{ x.key_obj }})</a>
{%- endfor %}
```
You can query the groups of any parent node (including those without slug).
[`templates/page.html`](./templates/page.html) uses this.
The keys (`'TestA', 'TestB'`) can be omitted which will return all groups of all attributes (you can still filter them with `x.config.key == 'TestC'`).
The `fields` and `flows` params are also optional.
With these you can match groups by `args.key.fieldKey` and `args.key.flowKey`.
For example, if you have a “main-tags” field and an “additional-tags” field, you can show the main-tags in a preview but both tag sets on a detail page.
### Sorting groups
Sorting is supported for the `vgroups` filter as well as for the children of each group (via config subsection `.children.order_by`).
Coming back to the previous example, `order_by` can be either a comma-separated string of keys or a list of strings.
Again, same [order-by](https://www.getlektor.com/docs/guides/page-order/) rules apply as for any other Lektor `Record`.
Only this time, the attributes of the `GroupBySource` object are used for sorting (including those you defined in the `.fields` subsection).
### Pagination
You may use the `.pagination` subsection or `watcher.config.set_pagination()` to configure a pagination controller.
The `url_path` of a paginated Page depends on your `slug` value.
If the slug ends on `/` or `/index.html`, Lektor will append `page/2/index.html` to the second page.
If the slug contains a `.` (e.g. `/a/{key}.html`), Lektor will insert `page2` in front of the extension (e.g., `/a/{key}page2.html`).
If you supply a different `url_suffix`, for example “.X.”, those same two urls will become `.X./2/index.html` and `/a/{key}.X.2.html` respectively.

examples/configs/advanced.ini

@@ -0,0 +1,15 @@
[testC]
root = /
slug = "advanced/{}/".format(this.key)
template = example-advanced.html
[testC.pattern]
match = {{([^}]{1,32})}}
[testC.fields]
desc = "Input object: {}, output key: {}".format(this.key_obj, this.key)
[testC.key_map]
Blog = case-sensitive
Two = three
three = no-nested-replace

examples/configs/groupby.ini

@@ -0,0 +1,22 @@
[testA]
enabled = True
root = /
slug = config/{key}.html
template = example-config.html
split = ' '
key_obj_fn = '{}-z-{}'.format(X.upper(), ARGS.key.fieldKey) if X else None
replace_none_key = unknown
[testA.children]
order_by = -title, body
[testA.pagination]
enabled = true
per_page = 1
url_suffix = .page.
[testA.fields]
title = "Tagged: " ~ this.key_obj
[testA.key_map]
Blog = News


@@ -0,0 +1,9 @@
_model: blog
---
title: Blog
---
tags:
Directory
Blog
samegroup


@@ -0,0 +1,9 @@
title: Hello Website
---
body: This is an example blog post. {{Tag}} in {{blog}}
---
tags:
Initial
Blog Post
samegroup


@@ -0,0 +1,8 @@
title: GroupBy Examples
---
body: Main body {{tag}} {{Two}}
---
tags:
Root
samegroup


@@ -0,0 +1,9 @@
title: Projects
---
body:
This is a list of the projects:
* Project 1
* Project 2
* Project 3


@@ -0,0 +1,4 @@
[model]
name = Blog Post
inherits = page
hidden = yes

7
examples/models/blog.ini Normal file

@@ -0,0 +1,7 @@
[model]
name = Blog
inherits = page
hidden = yes
[children]
model = blog-post

18
examples/models/page.ini Normal file

@@ -0,0 +1,18 @@
[model]
name = Page
label = {{ this.title }}
[fields.title]
label = Title
type = string
[fields.tags]
label = Tags
type = strings
testA = true
testB = true
[fields.body]
label = Body
type = markdown
testC = true

examples/packages/advanced-example/lektor_advanced.py

@@ -0,0 +1,40 @@
# -*- coding: utf-8 -*-
from lektor.pluginsystem import Plugin
from typing import Generator
import re
from lektor_groupby import GroupBy, GroupByCallbackArgs
class AdvancedGroupByPlugin(Plugin):
def on_groupby_before_build_all(self, groupby: GroupBy, builder, **extra):
# load config
config = self.get_config()
regex = config.get('testC.pattern.match')
try:
regex = re.compile(regex)
except Exception as e:
print('inlinetags.regex not valid: ' + str(e))
return
# load config directly (which also tracks dependency)
watcher = groupby.add_watcher('testC', config, pre_build=True)
@watcher.grouping()
def _replace(args: GroupByCallbackArgs) -> Generator[str, str, None]:
# args.field assumed to be Markdown
obj = args.field.source
url_map = {} # type Dict[str, str]
for match in regex.finditer(obj):
tag = match.group(1)
vobj = yield tag
if not hasattr(vobj, 'custom_attr'):
vobj.custom_attr = []
# update static custom attribute
vobj.custom_attr.append(tag)
url_map[tag] = vobj.url_path
print('[advanced] slugify:', tag, '->', vobj.key)
def _fn(match: re.Match) -> str:
tag = match.group(1)
return f'<a href="{url_map[tag]}">{tag}</a>'
args.field.source = regex.sub(_fn, obj)

examples/packages/advanced-example/setup.py

@@ -0,0 +1,12 @@
from setuptools import setup
setup(
name='lektor-advanced',
py_modules=['lektor_advanced'],
version='1.0',
entry_points={
'lektor.plugins': [
'advanced = lektor_advanced:AdvancedGroupByPlugin',
]
}
)

examples/packages/simple-example/lektor_simple.py

@@ -0,0 +1,41 @@
# -*- coding: utf-8 -*-
from lektor.pluginsystem import Plugin
from typing import Iterator
from datetime import datetime
from lektor_groupby import GroupBy, GroupByCallbackArgs
class SimpleGroupByPlugin(Plugin):
def on_groupby_before_build_all(self, groupby: GroupBy, builder, **extra):
watcher = groupby.add_watcher('testB', {
'root': '/blog',
'slug': 'simple/{key}/index.html',
'template': 'example-simple.html',
'key_obj_fn': 'X.upper() if X else "empty"',
'replace_none_key': 'unknown',
})
watcher.config.set_key_map({'Foo': 'bar'})
watcher.config.set_fields({'date': datetime.now()})
watcher.config.set_order_by('-title,body')
watcher.config.set_pagination(
enabled=True,
per_page=1,
url_suffix='p',
)
@watcher.grouping(flatten=True)
def fn_simple(args: GroupByCallbackArgs) -> Iterator[str]:
# Yield groups
value = args.field # type: list # since model is 'strings' type
for tag in value:
yield tag
# Everything below is just for documentation purposes
page = args.record # extract additional info from source
fieldKey, flowIndex, flowKey = args.key # or get field index
if flowIndex is None:
obj = page[fieldKey]
else:
obj = page[fieldKey].blocks[flowIndex].get(flowKey)
print('[simple] page:', page)
print('[simple] obj:', obj)
print('[simple] ')

examples/packages/simple-example/setup.py

@@ -0,0 +1,12 @@
from setuptools import setup
setup(
name='lektor-simple',
py_modules=['lektor_simple'],
version='1.0',
entry_points={
'lektor.plugins': [
'simple = lektor_simple:SimpleGroupByPlugin',
]
}
)


@@ -0,0 +1,4 @@
{% extends "page.html" %}
{% block body %}
{{ this.body }}
{% endblock %}


@@ -0,0 +1,6 @@
{% extends "page.html" %}
{% block body %}
{% for child in this.children %}
<a href="{{child|url}}">{{ child.title }}</a>
{% endfor %}
{% endblock %}

examples/templates/example-advanced.html

@@ -0,0 +1,5 @@
<h2>Path: {{ this | url(absolute=True) }}</h2>
<p>This is: {{this}}</p>
<p>Custom field, desc: "{{this.desc}}"</p>
<p>Custom static, seen objects: {{this.custom_attr}}</p>
<p>Children: {{this.children.all()}}</p>

examples/templates/example-config.html

@@ -0,0 +1,9 @@
<h2>Path: {{ this | url(absolute=True) }}</h2>
<p>This is: {{this}}</p>
<p>Object: "{{this.key_obj}}", Key: "{{this.key}}"</p>
<p>Custom field title: "{{this.title}}"</p>
<ul>
{%- for child in this.children %}
<li>Child: <a href="{{child|url}}">{{child.title}}</a> ({{child.path}})</li>
{%- endfor %}
</ul>

examples/templates/example-simple.html

@@ -0,0 +1,10 @@
<h2>Path: {{ this | url(absolute=True) }}</h2>
<p>This is: {{this}}</p>
<p>Key: {{this.key}}</p>
<p>Object: {{this.key_obj}}</p>
<p>Custom field date: {{this.date}}</p>
<ul>
{%- for child in this.children %}
<li>page "{{child.path}}" with tags: {{child.tags}}</li>
{%- endfor %}
</ul>

examples/templates/page.html

@@ -0,0 +1,29 @@
<!doctype html>
<meta charset="utf-8">
<title>{% block title %}Welcome{% endblock %}</title>
<style type="text/css">
header, footer { padding: 1em; background: #DDD; }
main { margin: 3em; }
</style>
<body>
<header>
<div>Nav: &nbsp;
<a href="{{ '/'|url }}">Root</a> &nbsp;
<a href="{{ '/blog'|url }}">Blog</a> &nbsp;
<a href="{{ '/projects'|url }}">Projects</a>
</div>
</header>
<main>
<h2>{{ this.title }}</h2>
{% block body %}{{ this.body }}{% endblock %}
</main>
<footer>
{%- for k, v in [('testA','Config'),('testB','Simple'),('testC','Advanced')] %}
<div>{{v}} Tags:
{%- for x in this|vgroups(k, recursive=True, order_by='key_obj') %}
<a href="{{ x|url }}">({{x.key}})</a>
{%- endfor %}
</div>
{%- endfor %}
</footer>
</body>


@@ -1,479 +0,0 @@
# -*- coding: utf-8 -*-
import lektor.db # typing
from lektor.build_programs import BuildProgram
from lektor.builder import Artifact, Builder # typing
from lektor.pluginsystem import Plugin
from lektor.reporter import reporter
from lektor.sourceobj import SourceObject, VirtualSourceObject
from lektor.types.flow import Flow, FlowType
from lektor.utils import bool_from_string, build_url, prune_file_and_folder
# for quick config
from lektor.utils import slugify
from typing import \
NewType, NamedTuple, Tuple, Dict, Set, List, Optional, Iterator, Callable
VPATH = '@groupby' # potentially unsafe. All matching entries are pruned.
# -----------------------------------
# Typing
# -----------------------------------
FieldValue = NewType('FieldValue', object) # lektor model data-field value
AttributeKey = NewType('AttributeKey', str) # attribute of lektor model
GroupKey = NewType('GroupKey', str) # key of group-by
class FieldKeyPath(NamedTuple):
fieldKey: str
flowIndex: Optional[int] = None
flowKey: Optional[str] = None
class GroupByCallbackArgs(NamedTuple):
record: lektor.db.Record
key: FieldKeyPath
field: FieldValue
class GroupByCallbackYield(NamedTuple):
key: GroupKey
extra: object
GroupingCallback = Callable[[GroupByCallbackArgs],
Iterator[GroupByCallbackYield]]
class GroupProducer(NamedTuple):
attribute: AttributeKey
func: GroupingCallback
flatten: bool = True
slug: Optional[str] = None
template: Optional[str] = None
dependency: Optional[str] = None
class GroupComponent(NamedTuple):
record: lektor.db.Record
extra: object
class UrlResolverConf(NamedTuple):
attribute: AttributeKey
group: GroupKey
slug: Optional[str] = None
# -----------------------------------
# Actual logic
# -----------------------------------
class GroupBySource(VirtualSourceObject):
'''
Holds information for a single group/cluster.
This object is accessible in your template file.
Attributes: record, attribute, group, children, template, slug
:DEFAULTS:
template: "groupby-attribute.html"
slug: "{attrib}/{group}/index.html"
'''
def __init__(
self,
record: lektor.db.Record,
attribute: AttributeKey,
group: GroupKey,
children: List[GroupComponent] = [],
slug: Optional[str] = None, # default: "{attrib}/{group}/index.html"
template: Optional[str] = None, # default: "groupby-attribute.html"
dependency: Optional[str] = None
):
super().__init__(record)
self.attribute = attribute
self.group = group
self.children = children
self.template = template or 'groupby-{}.html'.format(self.attribute)
self.dependency = dependency
# custom user path
slug = slug or '{attrib}/{group}/index.html'
slug = slug.replace('{attrib}', self.attribute)
slug = slug.replace('{group}', self.group)
if slug.endswith('/index.html'):
slug = slug[:-10]
self.slug = slug
@property
def path(self) -> str:
# Used in VirtualSourceInfo, used to prune VirtualObjects
return build_url([self.record.path, VPATH, self.attribute, self.group])
@property
def url_path(self) -> str:
return build_url([self.record.path, self.slug])
def iter_source_filenames(self) -> Iterator[str]:
if self.dependency:
yield self.dependency
for record, _ in self.children:
yield from record.iter_source_filenames()
def __str__(self) -> str:
txt = '<GroupBySource'
for x in ['attribute', 'group', 'template', 'slug']:
txt += ' {}="{}"'.format(x, getattr(self, x))
return txt + ' children={}>'.format(len(self.children))
class GroupByBuildProgram(BuildProgram):
''' Generates Build-Artifacts and write files. '''
def produce_artifacts(self) -> None:
url = self.source.url_path
if url.endswith('/'):
url += 'index.html'
self.declare_artifact(url, sources=list(
self.source.iter_source_filenames()))
GroupByPruner.track(url)
def build_artifact(self, artifact: Artifact) -> None:
self.source.pad.db.track_record_dependency(self.source)
artifact.render_template_into(self.source.template, this=self.source)
# -----------------------------------
# Helper
# -----------------------------------
class GroupByPruner:
'''
Static collector for build-artifact urls.
All non-tracked VPATH-urls will be pruned after build.
'''
_cache: Set[str] = set()
# Note: this var is static or otherwise two instances of
# GroupByCreator would prune each others artifacts.
@classmethod
def track(cls, url: str) -> None:
cls._cache.add(url.lstrip('/'))
@classmethod
def prune(cls, builder: Builder) -> None:
''' Remove previously generated, unreferenced Artifacts. '''
dest_path = builder.destination_path
con = builder.connect_to_database()
try:
with builder.new_build_state() as build_state:
for url, file in build_state.iter_artifacts():
if url.lstrip('/') in cls._cache:
continue # generated in this build-run
infos = build_state.get_artifact_dependency_infos(url, [])
for v_path, _ in infos:
if VPATH not in v_path:
continue # we only care about groupby Virtuals
reporter.report_pruned_artifact(url)
prune_file_and_folder(file.filename, dest_path)
build_state.remove_artifact(url)
break # there is only one VPATH-entry per source
finally:
con.close()
cls._cache.clear()
# -----------------------------------
# Main Component
# -----------------------------------
class GroupByCreator:
'''
Process all children with matching conditions under specified page.
Creates a grouping of pages with similar (self-defined) attributes.
The grouping is performed only once per build (or manually invoked).
'''
def __init__(self):
self._flows: Dict[AttributeKey, Dict[str, Set[str]]] = {}
self._models: Dict[AttributeKey, Dict[str, Dict[str, str]]] = {}
self._func: Dict[str, Set[GroupProducer]] = {}
self._resolve_map: Dict[str, UrlResolverConf] = {} # only for server
self._watched_once: Set[GroupingCallback] = set()
# --------------
# Initialize
# --------------
def initialize(self, db: lektor.db):
self._flows.clear()
self._models.clear()
self._resolve_map.clear()
for prod_list in self._func.values():
for producer in prod_list:
self._register(db, producer.attribute)
def _register(self, db: lektor.db, attrib: AttributeKey) -> None:
''' Preparation: find models and flow-models which contain attrib '''
if attrib in self._flows or attrib in self._models:
return # already added
# find flow blocks with attrib
_flows = {} # Dict[str, Set[str]]
for key, flow in db.flowblocks.items():
tmp1 = set(f.name for f in flow.fields
if bool_from_string(f.options.get(attrib, False)))
if tmp1:
_flows[key] = tmp1
# find models with attrib or flow-blocks containing attrib
_models = {} # Dict[str, Dict[str, str]]
for key, model in db.datamodels.items():
tmp2 = {} # Dict[str, str]
for field in model.fields:
if bool_from_string(field.options.get(attrib, False)):
tmp2[field.name] = '*' # include all children
elif isinstance(field.type, FlowType):
if any(x in _flows for x in field.type.flow_blocks):
tmp2[field.name] = '?' # only some flow blocks
if tmp2:
_models[key] = tmp2
self._flows[attrib] = _flows
self._models[attrib] = _models
# ----------------
# Add Observer
# ----------------
def watch(
self,
root: str,
attrib: AttributeKey, *,
flatten: bool = True, # if False, dont explode FlowType
slug: Optional[str] = None, # default: "{attrib}/{group}/index.html"
template: Optional[str] = None, # default: "groupby-attrib.html"
dependency: Optional[str] = None
) -> Callable[[GroupingCallback], None]:
'''
Decorator to subscribe to attrib-elements. Converter for groupby().
Refer to groupby() for further details.
(record, field-key, field) -> (group-key, extra-info)
:DEFAULTS:
template: "groupby-attrib.html"
slug: "{attrib}/{group}/index.html"
'''
root = root.rstrip('/') + '/'
def _decorator(fn: GroupingCallback):
if root not in self._func:
self._func[root] = set()
self._func[root].add(
GroupProducer(attrib, fn, flatten, template, slug, dependency))
return _decorator
def watch_once(self, *args, **kwarg) -> Callable[[GroupingCallback], None]:
''' Same as watch() but listener is auto removed after build. '''
def _decorator(fn: GroupingCallback):
self._watched_once.add(fn)
self.watch(*args, **kwarg)(fn)
return _decorator
def remove_watch_once(self) -> None:
''' Remove all watch-once listeners. '''
for k, v in self._func.items():
not_once = {x for x in v if x.func not in self._watched_once}
self._func[k] = not_once
self._watched_once.clear()
# ----------
# Helper
# ----------
def iter_record_fields(
self,
source: lektor.db.Record,
attrib: AttributeKey,
flatten: bool = False
) -> Iterator[Tuple[FieldKeyPath, FieldValue]]:
''' Enumerate all fields of a lektor.db.Record with attrib = True '''
assert isinstance(source, lektor.db.Record)
_flows = self._flows.get(attrib, {})
_models = self._models.get(attrib, {})
for r_key, subs in _models.get(source.datamodel.id, {}).items():
if subs == '*': # either normal field or flow type (all blocks)
field = source[r_key]
if flatten and isinstance(field, Flow):
for i, flow in enumerate(field.blocks):
flowtype = flow['_flowblock']
for f_key, block in flow._data.items():
if f_key.startswith('_'): # e.g., _flowblock
continue
yield FieldKeyPath(r_key, i, f_key), block
else:
yield FieldKeyPath(r_key), field
else: # always flow type (only some blocks)
for i, flow in enumerate(source[r_key].blocks):
flowtype = flow['_flowblock']
for f_key in _flows.get(flowtype, []):
yield FieldKeyPath(r_key, i, f_key), flow[f_key]
def groupby(
self,
attrib: AttributeKey,
root: lektor.db.Record,
func: GroupingCallback,
flatten: bool = False,
incl_attachments: bool = True
) -> Dict[GroupKey, List[GroupComponent]]:
'''
Traverse selected root record with all children and group by func.
Func is called with (record, FieldKeyPath, FieldValue).
Func may yield one or more (group-key, extra-info) tuples.
return {'group-key': [(record, extra-info), ...]}
'''
assert callable(func), 'no GroupingCallback provided'
assert isinstance(root, lektor.db.Record)
tmap = {} # type: Dict[GroupKey, List[GroupComponent]]
recursive_list = [root] # type: List[lektor.db.Record]
while recursive_list:
record = recursive_list.pop()
if hasattr(record, 'children'):
# recursive_list += record.children
recursive_list.extend(record.children)
if incl_attachments and hasattr(record, 'attachments'):
# recursive_list += record.attachments
recursive_list.extend(record.attachments)
for key, field in self.iter_record_fields(record, attrib, flatten):
for ret in func(GroupByCallbackArgs(record, key, field)) or []:
assert isinstance(ret, (tuple, list)), \
'Must return tuple (group-key, extra-info)'
group_key, extras = ret
if group_key not in tmap:
tmap[group_key] = []
tmap[group_key].append(GroupComponent(record, extras))
return tmap
# -----------------
# Create groups
# -----------------
def should_process(self, node: SourceObject) -> bool:
''' Check if record path is being watched. '''
return isinstance(node, lektor.db.Record) \
and node.url_path in self._func
def make_cluster(self, root: lektor.db.Record) -> Iterator[GroupBySource]:
''' Group by attrib and build Artifacts. '''
assert isinstance(root, lektor.db.Record)
for attr, fn, fl, temp, slug, dep in self._func.get(root.url_path, []):
groups = self.groupby(attr, root, func=fn, flatten=fl)
for group_key, children in groups.items():
obj = GroupBySource(root, attr, group_key, children,
template=temp, slug=slug, dependency=dep)
self.track_dev_server_path(obj)
yield obj
# ------------------
# Path resolving
# ------------------
def resolve_virtual_path(
self, node: SourceObject, pieces: List[str]
) -> Optional[GroupBySource]:
''' Given a @VPATH/attrib/groupkey path, determine url path. '''
if len(pieces) >= 2:
attrib: AttributeKey = pieces[0] # type: ignore[assignment]
group: GroupKey = pieces[1] # type: ignore[assignment]
for attr, _, _, _, slug, _ in self._func.get(node.url_path, []):
if attr == attrib:
# TODO: do we need to provide the template too?
return GroupBySource(node, attr, group, slug=slug)
return None
def track_dev_server_path(self, sender: GroupBySource) -> None:
''' Dev server only: Add target path to reverse artifact url lookup '''
self._resolve_map[sender.url_path] = \
UrlResolverConf(sender.attribute, sender.group, sender.slug)
def resolve_dev_server_path(
self, node: SourceObject, pieces: List[str]
) -> Optional[GroupBySource]:
''' Dev server only: Resolve actual url to virtual obj. '''
prev = self._resolve_map.get(build_url([node.url_path] + pieces))
if prev:
attrib, group, slug = prev
return GroupBySource(node, attrib, group, slug=slug)
return None
# -----------------------------------
# Plugin Entry
# -----------------------------------
class GroupByPlugin(Plugin):
name = 'GroupBy Plugin'
description = 'Cluster arbitrary records with field attribute keyword.'
def on_setup_env(self, **extra):
self.creator = GroupByCreator()
self.env.add_build_program(GroupBySource, GroupByBuildProgram)
# let other plugins register their @groupby.watch functions
self.emit('init', groupby=self.creator)
# resolve /tag/rss/ -> /tag/rss/index.html (local server only)
@self.env.urlresolver
def groupby_path_resolver(node, pieces):
if self.creator.should_process(node):
return self.creator.resolve_dev_server_path(node, pieces)
# use VPATH in templates: {{ '/@groupby/attrib/group' | url }}
@self.env.virtualpathresolver(VPATH.lstrip('@'))
def groupby_virtualpath_resolver(node, pieces):
if self.creator.should_process(node):
return self.creator.resolve_virtual_path(node, pieces)
# injection to generate GroupBy nodes when processing artifacts
@self.env.generator
def groupby_generator(node):
if self.creator.should_process(node):
yield from self.creator.make_cluster(node)
def _quick_config(self):
config = self.get_config()
for attrib in config.sections():
sect = config.section_as_dict(attrib)
root = sect.get('root', '/')
slug = sect.get('slug')
temp = sect.get('template')
split = sect.get('split')
@self.creator.watch_once(root, attrib, template=temp, slug=slug,
dependency=self.config_filename)
def _fn(args):
val = args.field
if isinstance(val, str):
val = val.split(split) if split else [val] # make list
if isinstance(val, list):
for tag in val:
yield slugify(tag), tag
def on_before_build_all(self, builder, **extra):
# load config file quick listeners (before initialize!)
self._quick_config()
# parse all models to detect attribs of listeners
self.creator.initialize(builder.pad.db)
def on_after_build_all(self, builder, **extra):
# remove all quick listeners (will be added again in the next build)
self.creator.remove_watch_once()
def on_after_prune(self, builder, **extra):
# TODO: find better way to prune unreferenced elements
GroupByPruner.prune(builder)

lektor_groupby/__init__.py

@@ -0,0 +1,4 @@
from .config import Config # noqa: F401
from .groupby import GroupBy # noqa: F401
from .plugin import GroupByPlugin # noqa: F401
from .watcher import GroupByCallbackArgs # noqa: F401

103
lektor_groupby/backref.py Normal file

@@ -0,0 +1,103 @@
from lektor.context import get_ctx
from typing import TYPE_CHECKING, Set, List, Union, Iterable, Iterator
import weakref
from .util import split_strip
if TYPE_CHECKING:
from lektor.builder import Builder
from lektor.db import Record
from .groupby import GroupBy
from .model import FieldKeyPath
from .vobj import GroupBySource
class WeakVGroupsList(list):
def add(self, strong: 'FieldKeyPath', weak: 'GroupBySource') -> None:
super().append((strong, weakref.ref(weak)))
# super().append((strong, weak)) # strong-ref
class GroupByRef:
@staticmethod
def of(builder: 'Builder') -> 'GroupBy':
''' Get the GroupBy object of a builder. '''
return builder.__groupby # type:ignore[attr-defined,no-any-return]
@staticmethod
def set(builder: 'Builder', groupby: 'GroupBy') -> None:
''' Set the GroupBy object of a builder. '''
builder.__groupby = groupby # type: ignore[attr-defined]
class VGroups:
@staticmethod
def of(record: 'Record') -> WeakVGroupsList:
'''
Return the (weak) set of virtual objects of a page.
Creates a new set if it does not exist yet.
'''
try:
wset = record.__vgroups # type: ignore[attr-defined]
except AttributeError:
wset = WeakVGroupsList()
record.__vgroups = wset # type: ignore[attr-defined]
return wset # type: ignore[no-any-return]
@staticmethod
def iter(
record: 'Record',
keys: Union[str, Iterable[str], None] = None,
*,
fields: Union[str, Iterable[str], None] = None,
flows: Union[str, Iterable[str], None] = None,
recursive: bool = False,
order_by: Union[str, Iterable[str], None] = None,
) -> Iterator['GroupBySource']:
''' Extract all referencing groupby virtual objects from a page. '''
# prepare filter
if isinstance(keys, str):
keys = [keys]
if isinstance(fields, str):
fields = [fields]
if isinstance(flows, str):
flows = [flows]
# get GroupBy object
ctx = get_ctx()
if not ctx:
raise NotImplementedError("Shouldn't happen, where is my context?")
builder = ctx.build_state.builder
GroupByRef.of(builder).make_once(keys) # ensure did cluster before use
# find groups
proc_list = [record]
done_list = [] # type: List[GroupBySource]
while proc_list:
page = proc_list.pop(0)
if recursive and hasattr(page, 'children'):
proc_list.extend(page.children)
for key, vobj in VGroups.of(page):
if fields and key.fieldKey not in fields:
continue
if flows and key.flowKey not in flows:
continue
if keys and vobj().config.key not in keys:
continue
done_list.append(vobj())
# manage config dependencies
deps = set() # type: Set[str]
for vobj in done_list:
deps.update(vobj.config.dependencies)
# ctx.record_virtual_dependency(vobj) # TODO: needed? works without
for dep in deps:
ctx.record_dependency(dep)
if order_by:
if isinstance(order_by, str):
order = split_strip(order_by, ',') # type: Iterable[str]
elif isinstance(order_by, (list, tuple)):
order = order_by
else:
raise AttributeError('order_by must be str or list type.')
# using get_sort_key() of GroupBySource
yield from sorted(done_list, key=lambda x: x.get_sort_key(order))
else:
yield from done_list

lektor_groupby/config.py Normal file

@@ -0,0 +1,200 @@
from inifile import IniFile
from lektor.environment import Expression
from lektor.context import Context
from lektor.utils import slugify as _slugify
from typing import (
TYPE_CHECKING, Set, Dict, Optional, Union, Any, List, Generator
)
from .util import split_strip
if TYPE_CHECKING:
from lektor.sourceobj import SourceObject
AnyConfig = Union['Config', IniFile, Dict]
class ConfigError(Exception):
''' Used to print a Lektor console error. '''
def __init__(
self, key: str, field: str, expr: str, error: Union[Exception, str]
):
self.key = key
self.field = field
self.expr = expr
self.error = error
def __str__(self) -> str:
return 'Invalid config for [{}.{}] = "{}" Error: {}'.format(
self.key, self.field, self.expr, repr(self.error))
class Config:
'''
Holds information for GroupByWatcher and GroupBySource.
This object is accessible in your template file ({{this.config}}).
Available attributes:
key, root, slug, template, enabled, dependencies, fields, key_map
'''
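# Illustrative template access, assuming a config key "tag" and default values:
#   {{ this.config.key }}      -> "tag"
#   {{ this.config.slug }}     -> "tag/{key}/"
#   {{ this.config.template }} -> "groupby-tag.html"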
def __init__(
self,
key: str, *,
root: Optional[str] = None, # default: "/"
slug: Optional[str] = None, # default: "{attr}/{group}/index.html"
template: Optional[str] = None, # default: "groupby-{attr}.html"
replace_none_key: Optional[str] = None, # default: None
key_obj_fn: Optional[str] = None, # default: None
) -> None:
self.key = key
self.root = (root or '/').rstrip('/') or '/'
self.slug = slug or (key + '/{key}/') # key = GroupBySource.key
self.template = template or f'groupby-{self.key}.html'
self.replace_none_key = replace_none_key
self.key_obj_fn = key_obj_fn
# editable after init
self.enabled = True
self.dependencies = set() # type: Set[str]
self.fields = {} # type: Dict[str, Any]
self.key_map = {} # type: Dict[str, str]
self.pagination = {} # type: Dict[str, Any]
self.order_by = None # type: Optional[List[str]]
def slugify(self, k: str) -> str:
''' key_map replace and slugify. '''
rv = self.key_map.get(k, k)
return _slugify(rv) or rv # the `or` allows for example "_"
def set_fields(self, fields: Optional[Dict[str, Any]]) -> None:
'''
The fields dict is a mapping of attrib = Expression values.
Each dict key will be added to the GroupBySource virtual object.
Each dict value is passed through jinja context first.
'''
self.fields = fields or {}
def set_key_map(self, key_map: Optional[Dict[str, str]]) -> None:
''' This mapping replaces group keys before slugify. '''
self.key_map = key_map or {}
def set_pagination(
self,
enabled: Optional[bool] = None,
per_page: Optional[int] = None,
url_suffix: Optional[str] = None,
items: Optional[str] = None,
) -> None:
''' Used for pagination. '''
self.pagination = dict(
enabled=enabled,
per_page=per_page,
url_suffix=url_suffix,
items=items,
)
def set_order_by(self, order_by: Optional[str]) -> None:
''' If specified, children will be sorted according to keys. '''
self.order_by = split_strip(order_by or '', ',') or None
def __repr__(self) -> str:
txt = '<GroupByConfig'
for x in ['enabled', 'key', 'root', 'slug', 'template', 'key_obj_fn']:
txt += ' {}="{}"'.format(x, getattr(self, x))
txt += f' fields="{", ".join(self.fields)}"'
if self.order_by:
txt += ' order_by="{}"'.format(', '.join(self.order_by))
return txt + '>'
@staticmethod
def from_dict(key: str, cfg: Dict[str, str]) -> 'Config':
''' Set config fields manually. Allowed: key, root, slug, template. '''
return Config(
key=key,
root=cfg.get('root'),
slug=cfg.get('slug'),
template=cfg.get('template'),
replace_none_key=cfg.get('replace_none_key'),
key_obj_fn=cfg.get('key_obj_fn'),
)
@staticmethod
def from_ini(key: str, ini: IniFile) -> 'Config':
''' Read and parse ini file. Also adds dependency tracking. '''
cfg = ini.section_as_dict(key) # type: Dict[str, str]
conf = Config.from_dict(key, cfg)
conf.enabled = ini.get_bool(key + '.enabled', True)
conf.dependencies.add(ini.filename)
conf.set_fields(ini.section_as_dict(key + '.fields'))
conf.set_key_map(ini.section_as_dict(key + '.key_map'))
conf.set_pagination(
enabled=ini.get_bool(key + '.pagination.enabled', None),
per_page=ini.get_int(key + '.pagination.per_page', None),
url_suffix=ini.get(key + '.pagination.url_suffix'),
items=ini.get(key + '.pagination.items'),
)
conf.set_order_by(ini.get(key + '.children.order_by', None))
return conf
@staticmethod
def from_any(key: str, config: AnyConfig) -> 'Config':
assert isinstance(config, (Config, IniFile, Dict))
if isinstance(config, Config):
return config
elif isinstance(config, IniFile):
return Config.from_ini(key, config)
elif isinstance(config, Dict):
return Config.from_dict(key, config)
# -----------------------------------
# Field Expressions
# -----------------------------------
def _make_expression(self, expr: Any, *, on: 'SourceObject', field: str) \
-> Union[Expression, Any]:
''' Create Expression and report any config error. '''
if not isinstance(expr, str):
return expr
try:
return Expression(on.pad.env, expr)
except Exception as e:
raise ConfigError(self.key, field, expr, e)
def eval_field(self, attr: str, *, on: 'SourceObject') \
-> Union[Expression, Any]:
''' Create an expression for a custom defined user field. '''
# do not `gather_dependencies` because fields are evaluated on the fly
# dependency tracking happens whenever a field is accessed
return self._make_expression(
self.fields[attr], on=on, field='fields.' + attr)
def eval_slug(self, key: str, *, on: 'SourceObject') -> Optional[str]:
''' Either perform a "{key}" substitution or evaluate expression. '''
cfg_slug = self.slug
if not cfg_slug:
return None
if '{key}' in cfg_slug:
if key:
return cfg_slug.replace('{key}', key)
else:
raise ConfigError(self.key, 'slug', cfg_slug,
'Cannot replace {key} with None')
return None
else:
# TODO: do we need `gather_dependencies` here too?
expr = self._make_expression(cfg_slug, on=on, field='slug')
return expr.evaluate(on.pad, this=on, alt=on.alt) or None
def eval_key_obj_fn(self, *, on: 'SourceObject', context: Dict) -> Any:
'''
If `key_obj_fn` is set, evaluate field expression.
Note: The function does not check whether `key_obj_fn` is set.
Return: A Generator result is automatically unpacked into a list.
'''
exp = self._make_expression(self.key_obj_fn, on=on, field='key_obj_fn')
with Context(pad=on.pad) as ctx:
with ctx.gather_dependencies(self.dependencies.add):
res = exp.evaluate(on.pad, this=on, alt=on.alt, values=context)
if isinstance(res, Generator):
res = list(res) # unpack for 1-to-n replacement
return res

lektor_groupby/groupby.py Normal file

@@ -0,0 +1,118 @@
from lektor.builder import PathCache
from lektor.db import Record # isinstance
from lektor.reporter import reporter # build
from typing import TYPE_CHECKING, List, Optional, Iterable
from .config import Config
from .watcher import Watcher
if TYPE_CHECKING:
from lektor.builder import Builder
from lektor.sourceobj import SourceObject
from .config import AnyConfig
from .resolver import Resolver
from .vobj import GroupBySource
class GroupBy:
'''
Process all children with matching conditions under specified page.
Creates a grouping of pages with similar (self-defined) attributes.
The grouping is performed only once per build.
'''
def __init__(self, resolver: 'Resolver') -> None:
self._building = False
self._watcher = [] # type: List[Watcher]
self._results = [] # type: List[GroupBySource]
self._pre_build_priority = [] # type: List[str] # config.key
self.resolver = resolver
@property
def isBuilding(self) -> bool:
return self._building
def add_watcher(
self, key: str, config: 'AnyConfig', *, pre_build: bool = False
) -> Watcher:
''' Init Config and add to watch list. '''
w = Watcher(Config.from_any(key, config))
self._watcher.append(w)
if pre_build:
self._pre_build_priority.append(w.config.key)
return w
def queue_all(self, builder: 'Builder') -> None:
''' Iterate full site-tree and queue all children. '''
# remove disabled watchers
self._watcher = [w for w in self._watcher if w.config.enabled]
if not self._watcher:
return
# initialize remaining (enabled) watchers
for w in self._watcher:
w.initialize(builder.pad)
# iterate over whole build tree
queue = builder.pad.get_all_roots() # type: List[SourceObject]
while queue:
record = queue.pop()
if hasattr(record, 'attachments'):
queue.extend(record.attachments)
if hasattr(record, 'children'):
queue.extend(record.children)
if isinstance(record, Record):
for w in self._watcher:
if w.should_process(record):
w.remember(record)
# build sources which need building before actual lektor build
if self._pre_build_priority:
self.make_once(self._pre_build_priority)
self._pre_build_priority.clear()
def make_once(self, filter_keys: Optional[Iterable[str]] = None) -> None:
'''
Perform groupby, iter over sources with watcher callback.
If `filter_keys` is set, ignore all other watchers.
'''
if not self._watcher:
return
remaining = []
for w in self._watcher:
# only process vobjs that are used somewhere
if filter_keys and w.config.key not in filter_keys:
remaining.append(w)
continue
self.resolver.reset(w.config.key)
# these are used in the current context (or on `build_all`)
for vobj in w.iter_sources():
# add original source
self._results.append(vobj)
self.resolver.add(vobj)
# and also add pagination sources
for sub_vobj in vobj.__iter_pagination_sources__():
self._results.append(sub_vobj)
self.resolver.add(sub_vobj)
# TODO: if this should ever run concurrently, pop() from watchers
self._watcher = remaining
def build_all(
self,
builder: 'Builder',
specific: Optional['GroupBySource'] = None
) -> None:
'''
Build actual artifacts (if needed).
If `specific` is set, only build the artifacts for that single vobj
'''
if not self._watcher and not self._results:
return
with reporter.build('groupby', builder): # type:ignore
# in case no page used the |vgroups filter
self.make_once([specific.config.key] if specific else None)
self._building = True
path_cache = PathCache(builder.env)
for vobj in self._results:
if specific and vobj.path != specific.path:
continue
if vobj.slug:
builder.build(vobj, path_cache)
del path_cache
self._building = False
self._results.clear() # garbage collect weak refs

lektor_groupby/model.py Normal file

@@ -0,0 +1,67 @@
from lektor.db import Database, Record # typing
from lektor.types.flow import Flow, FlowType
from lektor.utils import bool_from_string
from typing import Set, Dict, Tuple, Any, NamedTuple, Optional, Iterator
class FieldKeyPath(NamedTuple):
fieldKey: str
flowIndex: Optional[int] = None
flowKey: Optional[str] = None
class ModelReader:
'''
Find models and flow-models which contain attribute.
Flows are either returned directly (flatten=False) or
expanded so that each flow-block is yielded (flatten=True)
'''
def __init__(self, db: Database, attr: str, flatten: bool = False) -> None:
self.flatten = flatten
self._flows = {} # type: Dict[str, Set[str]]
self._models = {} # type: Dict[str, Dict[str, str]]
# find flow blocks containing attribute
for key, flow in db.flowblocks.items():
tmp1 = set(f.name for f in flow.fields
if bool_from_string(f.options.get(attr, False)))
if tmp1:
self._flows[key] = tmp1
# find models and flow-blocks containing attribute
for key, model in db.datamodels.items():
tmp2 = {} # Dict[str, str]
for field in model.fields:
if bool_from_string(field.options.get(attr, False)):
tmp2[field.name] = '*' # include all children
elif isinstance(field.type, FlowType) and self._flows:
# only processed if at least one flow has attr
fbs = field.type.flow_blocks
# if fbs == None, all flow-blocks are allowed
if fbs is None or any(x in self._flows for x in fbs):
tmp2[field.name] = '?' # only some flow blocks
if tmp2:
self._models[key] = tmp2
def read(self, record: Record) -> Iterator[Tuple[FieldKeyPath, Any]]:
''' Enumerate all fields of a Record with attrib = True. '''
assert isinstance(record, Record)
for r_key, subs in self._models.get(record.datamodel.id, {}).items():
field = record[r_key]
if not field:
yield FieldKeyPath(r_key), field
continue
if subs == '*': # either normal field or flow type (all blocks)
if self.flatten and isinstance(field, Flow):
for i, flow in enumerate(field.blocks):
flowtype = flow['_flowblock']
for f_key, block in flow._data.items():
if f_key.startswith('_'): # e.g., _flowblock
continue
yield FieldKeyPath(r_key, i, f_key), block
else:
yield FieldKeyPath(r_key), field
else: # always flow type (only some blocks)
for i, flow in enumerate(field.blocks):
flowtype = flow['_flowblock']
for f_key in self._flows.get(flowtype, []):
yield FieldKeyPath(r_key, i, f_key), flow[f_key]

lektor_groupby/pagination.py Normal file

@@ -0,0 +1,29 @@
from lektor import datamodel
from typing import TYPE_CHECKING, Any, Dict
if TYPE_CHECKING:
from lektor.environment import Environment
from lektor.pagination import Pagination
from lektor.sourceobj import SourceObject
class PaginationConfig(datamodel.PaginationConfig):
# because original method does not work for virtual sources.
def __init__(self, env: 'Environment', config: Dict[str, Any], total: int):
super().__init__(env, **config)
self._total_items_count = total
@staticmethod
def get_record_for_page(record: 'SourceObject', page_num: int) -> Any:
for_page = getattr(record, '__for_page__', None)
if callable(for_page):
return for_page(page_num)
return datamodel.PaginationConfig.get_record_for_page(record, page_num)
def count_total_items(self, record: 'SourceObject') -> int:
''' Override super() to prevent a record.children query. '''
return self._total_items_count
if TYPE_CHECKING:
def get_pagination_controller(self, record: 'SourceObject') \
-> 'Pagination':
...

lektor_groupby/plugin.py Normal file

@@ -0,0 +1,102 @@
from lektor.assets import Asset # isinstance
from lektor.db import Record # isinstance
from lektor.pluginsystem import Plugin # subclass
from typing import TYPE_CHECKING, Set, Iterator, Any
from .backref import GroupByRef, VGroups
from .groupby import GroupBy
from .pruner import prune
from .resolver import Resolver
from .vobj import GroupBySource, GroupByBuildProgram
if TYPE_CHECKING:
from lektor.builder import Builder, BuildState
from lektor.sourceobj import SourceObject
from .watcher import GroupByCallbackArgs
class GroupByPlugin(Plugin):
name = 'GroupBy Plugin'
description = 'Cluster arbitrary records with field attribute keyword.'
def on_setup_env(self, **extra: Any) -> None:
self.resolver = Resolver(self.env)
self.env.add_build_program(GroupBySource, GroupByBuildProgram)
self.env.jinja_env.filters.update(vgroups=VGroups.iter)
# keep track of already rebuilt GroupBySource artifacts
self._is_build_all = False
self._has_been_built = set() # type: Set[str]
def on_before_build_all(self, **extra: Any) -> None:
self._is_build_all = True
def on_before_build(
self, builder: 'Builder', source: 'SourceObject', **extra: Any
) -> None:
# before-build may be called before before-build-all (issue #1017)
if isinstance(source, Asset):
return
# make GroupBySource available before building any Record artifact
groupby = self._init_once(builder)
# special handling for self-building of GroupBySource artifacts
if isinstance(source, GroupBySource):
if groupby.isBuilding: # build is during groupby.build_all()
self._has_been_built.add(source.path)
elif source.path not in self._has_been_built:
groupby.build_all(builder, source) # needs rebuilding
def on_after_build(
self, source: 'SourceObject', build_state: 'BuildState', **extra: Any
) -> None:
# a normal page update. We may need to re-build our GroupBySource
if not self._is_build_all and isinstance(source, Record):
if build_state.updated_artifacts:
# TODO: instead of clear(), only remove affected GroupBySource
# ideally, identify which file has triggered the re-build
self._has_been_built.clear()
def on_after_build_all(self, builder: 'Builder', **extra: Any) -> None:
# by now, most likely already built. So, build_all() is a no-op
self._init_once(builder).build_all(builder)
self._is_build_all = False
def on_after_prune(self, builder: 'Builder', **extra: Any) -> None:
# TODO: find a better way to prune unreferenced elements
prune(builder, self.resolver.files)
# ------------
# internal
# ------------
def _init_once(self, builder: 'Builder') -> GroupBy:
try:
return GroupByRef.of(builder)
except AttributeError:
groupby = GroupBy(self.resolver)
GroupByRef.set(builder, groupby)
self._load_quick_config(groupby)
# let other plugins register their @groupby.watch functions
self.emit('before-build-all', groupby=groupby, builder=builder)
groupby.queue_all(builder)
return groupby
def _load_quick_config(self, groupby: GroupBy) -> None:
''' Load config file quick listeners. '''
config = self.get_config()
for key in config.sections():
if '.' in key: # e.g., key.fields and key.key_map
continue
watcher = groupby.add_watcher(key, config)
split = config.get(key + '.split') # type: str
@watcher.grouping()
def _fn(args: 'GroupByCallbackArgs') -> Iterator[str]:
val = args.field
if isinstance(val, str) and val != '':
val = map(str.strip, val.split(split)) if split else [val]
elif isinstance(val, (bool, int, float)):
val = [val]
elif not val: # after checking for '', False, 0, and 0.0
val = [None]
if isinstance(val, (list, map)):
yield from val

lektor_groupby/pruner.py Normal file

@@ -0,0 +1,77 @@
'''
Usage:
VirtualSourceObject.produce_artifacts()
-> remember url and later supply as `current_urls`
VirtualSourceObject.build_artifact()
-> `get_ctx().record_virtual_dependency(VirtualPruner())`
'''
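# Minimal sketch of the second step (mirrors GroupByBuildProgram.build_artifact in vobj.py):
#   def build_artifact(self, artifact):
#       get_ctx().record_virtual_dependency(VirtualPruner())
#       ...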
from lektor.reporter import reporter # report_pruned_artifact
from lektor.sourceobj import VirtualSourceObject # subclass
from lektor.utils import prune_file_and_folder
import os
from typing import TYPE_CHECKING, Set, List, Iterable
if TYPE_CHECKING:
from lektor.builder import Builder
from sqlite3 import Connection
class VirtualPruner(VirtualSourceObject):
''' Indicate that a generated VirtualSourceObject has pruning support. '''
VPATH = '/@VirtualPruner'
def __init__(self) -> None:
self._path = VirtualPruner.VPATH # if needed, add suffix variable
@property
def path(self) -> str: # type: ignore[override]
return self._path
def prune(builder: 'Builder', current_urls: Iterable[str]) -> None:
''' Removes previously generated, but now unreferenced Artifacts. '''
dest_dir = builder.destination_path
con = builder.connect_to_database()
try:
previous = _query_prunable(con)
current = _normalize_urls(current_urls)
to_be_pruned = previous.difference(current)
for file in to_be_pruned:
reporter.report_pruned_artifact(file) # type: ignore
prune_file_and_folder(os.path.join(
dest_dir, file.strip('/').replace('/', os.path.sep)), dest_dir)
# if no exception raised, update db to remove obsolete references
_prune_db_artifacts(con, list(to_be_pruned))
finally:
con.close()
# ---------------------------
# Internal helper methods
# ---------------------------
def _normalize_urls(urls: Iterable[str]) -> Set[str]:
cache = set()
for url in urls:
if url.endswith('/'):
url += 'index.html'
cache.add(url.lstrip('/'))
return cache
def _query_prunable(conn: 'Connection') -> Set[str]:
''' Query database for artifacts that have the VirtualPruner dependency '''
cur = conn.cursor()
cur.execute('SELECT artifact FROM artifacts WHERE source = ?',
[VirtualPruner.VPATH])
return set(x for x, in cur.fetchall())
def _prune_db_artifacts(conn: 'Connection', urls: List[str]) -> None:
''' Remove obsolete artifact references from database. '''
MAX_VARS = 999 # Default SQLITE_MAX_VARIABLE_NUMBER.
cur = conn.cursor()
for i in range(0, len(urls), MAX_VARS):
batch = urls[i: i + MAX_VARS]
cur.execute('DELETE FROM artifacts WHERE artifact in ({})'.format(
','.join(['?'] * len(batch))), batch)
conn.commit()

lektor_groupby/query.py Normal file

@@ -0,0 +1,80 @@
# adapting https://github.com/dairiki/lektorlib/blob/master/lektorlib/query.py
from lektor.constants import PRIMARY_ALT
from lektor.db import Query # subclass
from typing import TYPE_CHECKING, List, Optional, Generator, Iterable
if TYPE_CHECKING:
from lektor.db import Record, Pad
class FixedRecordsQuery(Query):
def __init__(
self, pad: 'Pad', child_paths: Iterable[str], alt: str = PRIMARY_ALT
):
''' Query with a pre-defined list of children of type Record. '''
super().__init__('/', pad, alt=alt)
self.__child_paths = [x.lstrip('/') for x in child_paths]
def _get(
self, path: str, persist: bool = True, page_num: Optional[int] = None
) -> Optional['Record']:
''' Internal getter for a single Record. '''
if path not in self.__child_paths:
return None
if page_num is None:
page_num = self._page_num
return self.pad.get( # type: ignore[no-any-return]
path, alt=self.alt, page_num=page_num, persist=persist)
def _iterate(self) -> Generator['Record', None, None]:
''' Iterate over internal set of Record elements. '''
# ignore self record dependency from super()
for path in self.__child_paths:
record = self._get(path, persist=False)
if record is None:
if self._page_num is not None:
# Sanity check: ensure the unpaginated version exists
unpaginated = self._get(path, persist=False, page_num=None)
if unpaginated is not None:
# Requested explicit page_num, but source does not
# support pagination. Punt and skip it.
continue
raise RuntimeError('could not load source for ' + path)
is_attachment = getattr(record, 'is_attachment', False)
if self._include_attachments and not is_attachment \
or self._include_pages and is_attachment:
continue
if self._matches(record):
yield record
def get_order_by(self) -> Optional[List[str]]:
''' Return list of attribute strings for sort order. '''
# ignore datamodel ordering from super()
return self._order_by # type: ignore[no-any-return]
def count(self) -> int:
''' Count matched objects. '''
if self._pristine:
return len(self.__child_paths)
return super().count() # type: ignore[no-any-return]
@property
def total(self) -> int:
''' Return total entries count (without any filter). '''
return len(self.__child_paths)
def get(self, path: str, page_num: Optional[int] = None) \
-> Optional['Record']:
''' Return Record with given path '''
if path in self.__child_paths:
return self._get(path, page_num=page_num)
return None
def __bool__(self) -> bool:
if self._pristine:
return len(self.__child_paths) > 0
return super().__bool__()
if TYPE_CHECKING:
def request_page(self, page_num: Optional[int]) -> 'FixedRecordsQuery':
...

lektor_groupby/resolver.py Normal file

@@ -0,0 +1,95 @@
from lektor.db import Page # isinstance
from typing import TYPE_CHECKING, NamedTuple, Dict, List, Set, Any, Optional
from .util import build_url
from .vobj import VPATH, GroupBySource
if TYPE_CHECKING:
from lektor.environment import Environment
from lektor.sourceobj import SourceObject
from .config import Config
class ResolverEntry(NamedTuple):
key: str
key_obj: Any
config: 'Config'
page: Optional[int]
def equals(
self, path: str, conf_key: str, vobj_key: str, page: Optional[int]
) -> bool:
return self.key == vobj_key \
and self.config.key == conf_key \
and self.config.root == path \
and self.page == page
class Resolver:
'''
Resolve virtual paths and urls ending in /.
Init will subscribe to @urlresolver and @virtualpathresolver.
'''
def __init__(self, env: 'Environment') -> None:
self._data = {} # type: Dict[str, Dict[str, ResolverEntry]]
env.urlresolver(self.resolve_server_path)
env.virtualpathresolver(VPATH.lstrip('@'))(self.resolve_virtual_path)
@property
def has_any(self) -> bool:
return any(bool(x) for x in self._data.values())
@property
def files(self) -> Set[str]:
return set(y for x in self._data.values() for y in x.keys())
def reset(self, key: Optional[str] = None) -> None:
''' Clear previously recorded virtual objects. '''
if key:
if key in self._data: # only delete if exists
del self._data[key]
else:
self._data.clear()
def add(self, vobj: GroupBySource) -> None:
''' Track new virtual object (only if slug is set). '''
if vobj.slug:
# `page_num = 1` overwrites `page_num = None` -> same url_path()
if vobj.config.key not in self._data:
self._data[vobj.config.key] = {}
self._data[vobj.config.key][vobj.url_path] = ResolverEntry(
vobj.key, vobj.key_obj, vobj.config, vobj.page_num)
# ------------
# Resolver
# ------------
def resolve_server_path(self, node: 'SourceObject', pieces: List[str]) \
-> Optional[GroupBySource]:
''' Local server only: resolve /tag/rss/ -> /tag/rss/index.html '''
if isinstance(node, Page):
url = build_url([node.url_path] + pieces)
for subset in self._data.values():
rv = subset.get(url)
if rv:
return GroupBySource(
node, rv.key, rv.config, rv.page).finalize(rv.key_obj)
return None
def resolve_virtual_path(self, node: 'SourceObject', pieces: List[str]) \
-> Optional[GroupBySource]:
''' Admin UI only: Prevent server error and null-redirect. '''
# format: /path/to/page@groupby/{config-key}/{vobj-key}/{page-num}
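# e.g. pieces = ['tag', 'python', '2'] for /blog@groupby/tag/python/2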
if isinstance(node, Page) and len(pieces) >= 2:
path = node['_path'] # type: str
conf_key, vobj_key, *optional_page = pieces
page = None
if optional_page:
try:
page = int(optional_page[0])
except ValueError:
pass
for rv in self._data.get(conf_key, {}).values():
if rv.equals(path, conf_key, vobj_key, page):
return GroupBySource(
node, rv.key, rv.config, rv.page).finalize(rv.key_obj)
return None

lektor_groupby/util.py Normal file

@@ -0,0 +1,62 @@
from typing import List, Dict, Optional, TypeVar
from typing import Callable, Any, Union, Generic
T = TypeVar('T')
def most_used_key(keys: List[T]) -> Optional[T]:
''' Find string with most occurrences. '''
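# e.g. most_used_key(['a', 'b', 'a']) -> 'a'; most_used_key([]) -> None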
if len(keys) < 3:
return keys[0] if keys else None # TODO: first vs last occurrence
best_count = 0
best_key = None
tmp = {} # type: Dict[T, int]
for k in keys:
num = (tmp[k] + 1) if k in tmp else 1
tmp[k] = num
if num > best_count: # TODO: (>) vs (>=), first vs last occurrence
best_count = num
best_key = k
return best_key
def split_strip(data: str, delimiter: str = ',') -> List[str]:
''' Split by delimiter and strip each str separately. Omit if empty. '''
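# e.g. split_strip(' a, ,b ') -> ['a', 'b']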
ret = []
for x in data.split(delimiter):
x = x.strip()
if x:
ret.append(x)
return ret
def insert_before_ext(data: str, ins: str, delimiter: str = '.') -> str:
''' Insert text before last index of delimiter (or at the end). '''
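# e.g. insert_before_ext('feed.xml', '.page2') -> 'feed.page2.xml'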
assert delimiter in data, 'Could not insert before delimiter: ' + delimiter
idx = data.rindex(delimiter)
return data[:idx] + ins + data[idx:]
def build_url(parts: List[str]) -> str:
''' Build URL similar to lektor.utils.build_url '''
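# e.g. build_url(['/blog/', 'tag', 'rss']) -> '/blog/tag/rss/'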
url = ''
for comp in parts:
txt = str(comp).strip('/')
if txt:
url += '/' + txt
if '.' not in url.rsplit('/', 1)[-1]:
url += '/'
return url or '/'
class cached_property(Generic[T]):
''' Calculate complex property only once. '''
def __init__(self, fn: Callable[[Any], T]) -> None:
self.fn = fn
def __get__(self, obj: object, typ: Union[type, None] = None) -> T:
if obj is None:
return self # type: ignore
ret = obj.__dict__[self.fn.__name__] = self.fn(obj)
return ret

lektor_groupby/vobj.py Normal file

@@ -0,0 +1,287 @@
from lektor.build_programs import BuildProgram # subclass
from lektor.context import get_ctx
from lektor.db import _CmpHelper
from lektor.environment import Expression
from lektor.sourceobj import VirtualSourceObject # subclass
from typing import (
TYPE_CHECKING, List, Any, Dict, Optional, Generator, Iterator, Iterable
)
from .pagination import PaginationConfig
from .pruner import VirtualPruner
from .query import FixedRecordsQuery
from .util import most_used_key, insert_before_ext, build_url, cached_property
if TYPE_CHECKING:
from lektor.pagination import Pagination
from lektor.builder import Artifact
from lektor.db import Record
from .config import Config
VPATH = '@groupby' # potentially unsafe. All matching entries are pruned.
# -----------------------------------
# VirtualSource
# -----------------------------------
class GroupBySource(VirtualSourceObject):
'''
Holds information for a single group/cluster.
This object is accessible in your template file.
Attributes: record, key, key_obj, slug, children, config
'''
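# Illustrative template usage (assuming records have a `title` field):
#   <h1>{{ this.key_obj }}</h1>
#   {% for child in this.children %} {{ child.title }} {% endfor %}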
def __init__(
self,
record: 'Record',
key: str,
config: 'Config',
page_num: Optional[int] = None
) -> None:
super().__init__(record)
self.__children = [] # type: List[str]
self.__key_obj_map = [] # type: List[Any]
self._expr_fields = {} # type: Dict[str, Expression]
self.key = key
self.config = config
self.page_num = page_num
def append_child(self, child: 'Record', key_obj: Any) -> None:
if child not in self.__children:
self.__children.append(child.path)
# __key_obj_map is later used to find most used key_obj
self.__key_obj_map.append(key_obj)
def _update_attr(self, key: str, value: Any) -> None:
''' Set or remove Jinja evaluated Expression field. '''
# TODO: instead we could evaluate the fields only once.
# But then we need to record_dependency() every successive access
if isinstance(value, Expression):
self._expr_fields[key] = value
try:
delattr(self, key)
except AttributeError:
pass
else:
if key in self._expr_fields:
del self._expr_fields[key]
setattr(self, key, value)
# -------------------------
# Evaluate Extra Fields
# -------------------------
def finalize(self, key_obj: Optional[Any] = None) \
-> 'GroupBySource':
# make a sorted children query
self._query = FixedRecordsQuery(self.pad, self.__children, self.alt)
self._query._order_by = self.config.order_by
del self.__children
# set indexed original value (can be: str, int, float, bool, obj)
self.key_obj = key_obj or most_used_key(self.__key_obj_map)
del self.__key_obj_map
if key_obj: # exit early if initialized through resolver
return self
# extra fields
for attr in self.config.fields:
self._update_attr(attr, self.config.eval_field(attr, on=self))
return self
@cached_property
def slug(self) -> Optional[str]:
# evaluate slug Expression once we need it
slug = self.config.eval_slug(self.key, on=self)
if slug and slug.endswith('/index.html'):
slug = slug[:-10]
return slug
# -----------------------
# Pagination handling
# -----------------------
@property
def supports_pagination(self) -> bool:
return self.config.pagination['enabled'] # type: ignore[no-any-return]
@cached_property
def _pagination_config(self) -> 'PaginationConfig':
# Generate `PaginationConfig` once we need it
return PaginationConfig(self.record.pad.env, self.config.pagination,
self._query.total)
@cached_property
def pagination(self) -> 'Pagination':
# Generate `Pagination` once we need it
return self._pagination_config.get_pagination_controller(self)
def __iter_pagination_sources__(self) -> Iterator['GroupBySource']:
''' If pagination is enabled, yields `GroupBySourcePage` sub-pages. '''
# Used in GroupBy.make_once() to generate paginated child sources
if self._pagination_config.enabled and self.page_num is None:
for page_num in range(self._pagination_config.count_pages(self)):
yield self.__for_page__(page_num + 1)
def __for_page__(self, page_num: Optional[int]) -> 'GroupBySource':
''' Get source object for a (possibly) different page number. '''
assert page_num is not None
return GroupBySourcePage(self, page_num)
# ---------------------
# Lektor properties
# ---------------------
@property
def path(self) -> str: # type: ignore[override]
# Used in VirtualSourceInfo and for pruning VirtualObjects
vpath = f'{self.record.path}{VPATH}/{self.config.key}/{self.key}'
if self.page_num:
vpath += '/' + str(self.page_num)
return vpath
@cached_property
def url_path(self) -> str: # type: ignore[override]
''' Actual path to resource as seen by the browser. '''
# check if slug is absolute URL
slug = self.slug
if slug and slug.startswith('/'):
parts = [self.pad.get_root(alt=self.alt).url_path]
else:
parts = [self.record.url_path]
# slug can be None!!
if not slug:
return build_url(parts)
# if pagination enabled, append pagination.url_suffix to path
if self.page_num and self.page_num > 1:
sffx = self._pagination_config.url_suffix
if '.' in slug.rsplit('/', 1)[-1]:
# default: ../slugpage2.html (use e.g.: url_suffix = .page.)
parts.append(insert_before_ext(
slug, sffx + str(self.page_num), '.'))
else:
# default: ../slug/page/2/index.html
parts += [slug, sffx, self.page_num]
else:
parts.append(slug)
return build_url(parts)
def iter_source_filenames(self) -> Generator[str, None, None]:
''' Enumerate all dependencies '''
if self.config.dependencies:
yield from self.config.dependencies
for record in self.children:
yield from record.iter_source_filenames()
# def get_checksum(self, path_cache: 'PathCache') -> Optional[str]:
# deps = [self.pad.env.jinja_env.get_or_select_template(
# self.config.template).filename]
# deps.extend(self.iter_source_filenames())
# sums = '|'.join(path_cache.get_file_info(x).filename_and_checksum
# for x in deps if x) + str(self.children.count())
# return hashlib.sha1(sums.encode('utf-8')).hexdigest() if sums else None
def get_sort_key(self, fields: Iterable[str]) -> List:
def cmp_val(field: str) -> Any:
reverse = field.startswith('-')
if reverse or field.startswith('+'):
field = field[1:]
return _CmpHelper(getattr(self, field, None), reverse)
return [cmp_val(field) for field in fields or []]
# -----------------------
# Properties & Helper
# -----------------------
@property
def children(self) -> FixedRecordsQuery:
''' Return query of children of type Record. '''
return self._query
def __getitem__(self, key: str) -> Any:
# Used for virtual path resolver
if key in ('_path', '_alt'):
return getattr(self, key[1:])
return self.__missing__(key)
def __getattr__(self, key: str) -> Any:
''' Lazy evaluate custom user field expressions. '''
if key in self._expr_fields:
expr = self._expr_fields[key]
return expr.evaluate(self.pad, this=self, alt=self.alt)
raise AttributeError
def __lt__(self, other: 'GroupBySource') -> bool:
# Used for |sort filter (`key_obj` is the indexed original value)
if isinstance(self.key_obj, (bool, int, float)) and \
isinstance(other.key_obj, (bool, int, float)):
return self.key_obj < other.key_obj
if self.key_obj is None:
return False # this will sort None at the end
if other.key_obj is None:
return True
return str(self.key_obj).lower() < str(other.key_obj).lower()
def __eq__(self, other: object) -> bool:
# Used for |unique filter
if self is other:
return True
return isinstance(other, GroupBySource) and \
self.path == other.path and self.slug == other.slug
def __hash__(self) -> int:
# Used for hashing in set and dict
return hash((self.path, self.slug))
def __repr__(self) -> str:
return '<GroupBySource path="{}" children={}>'.format(
self.path,
self.children.count() if hasattr(self, 'children') else '?')
# -----------------------------------
# BuildProgram
# -----------------------------------
class GroupByBuildProgram(BuildProgram):
''' Generate Build-Artifacts and write files. '''
def produce_artifacts(self) -> None:
pagination_enabled = self.source._pagination_config.enabled
if pagination_enabled and self.source.page_num is None:
return # only __iter_pagination_sources__()
url = self.source.url_path
if url.endswith('/'):
url += 'index.html'
self.declare_artifact(url, sources=list(
self.source.iter_source_filenames()))
def build_artifact(self, artifact: 'Artifact') -> None:
get_ctx().record_virtual_dependency(VirtualPruner())
artifact.render_template_into(
self.source.config.template, this=self.source)
class GroupBySourcePage(GroupBySource):
''' Pagination wrapper. Redirects get attr/item to non-paginated node. '''
def __init__(self, parent: 'GroupBySource', page_num: int) -> None:
self.__parent = parent
self.page_num = page_num
def __for_page__(self, page_num: Optional[int]) -> 'GroupBySource':
''' Get source object for a (possibly) different page number. '''
if page_num is None:
return self.__parent
if page_num == self.page_num:
return self
return GroupBySourcePage(self.__parent, page_num)
def __getitem__(self, key: str) -> Any:
return self.__parent.__getitem__(key)
def __getattr__(self, key: str) -> Any:
return getattr(self.__parent, key)
def __repr__(self) -> str:
return '<GroupBySourcePage path="{}" page={}>'.format(
self.__parent.path, self.page_num)

lektor_groupby/watcher.py Normal file

@@ -0,0 +1,148 @@
from typing import (
TYPE_CHECKING, Dict, List, Any, Union, NamedTuple,
Optional, Callable, Iterator, Generator
)
from .backref import VGroups
from .model import ModelReader
from .vobj import GroupBySource
if TYPE_CHECKING:
from lektor.db import Pad, Record
from .config import Config
from .model import FieldKeyPath
class GroupByCallbackArgs(NamedTuple):
record: 'Record'
key: 'FieldKeyPath'
field: Any # lektor model data-field value
GroupingCallback = Callable[[GroupByCallbackArgs], Union[
Iterator[Any],
Generator[Any, Optional[GroupBySource], None],
]]
class Watcher:
'''
Callback is called with (Record, FieldKeyPath, field-value).
Callback may yield 0-n objects.
'''
def __init__(self, config: 'Config') -> None:
self.config = config
self._root = self.config.root
def grouping(self, flatten: bool = True) \
-> Callable[[GroupingCallback], None]:
'''
Decorator to subscribe to attrib-elements.
If flatten = False, don't explode FlowType.
(record, field-key, field) -> value
'''
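# Illustrative usage (cf. the quick-config callback in plugin.py):
#   @watcher.grouping()
#   def convert(args: GroupByCallbackArgs) -> Iterator[str]:
#       yield from args.field.split(' ')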
def _decorator(fn: GroupingCallback) -> None:
self.flatten = flatten
self.callback = fn
return _decorator
def initialize(self, pad: 'Pad') -> None:
''' Reset internal state. You must initialize before each build! '''
assert callable(self.callback), 'No grouping callback provided.'
self._model_reader = ModelReader(pad.db, self.config.key, self.flatten)
self._root_record = {} # type: Dict[str, Record]
self._state = {} # type: Dict[str, Dict[Optional[str], GroupBySource]]
self._rmmbr = [] # type: List[Record]
for alt in pad.config.iter_alternatives():
self._root_record[alt] = pad.get(self._root, alt=alt)
self._state[alt] = {}
def should_process(self, node: 'Record') -> bool:
''' Check if record path is being watched. '''
return str(node['_path']).startswith(self._root)
def process(self, record: 'Record') -> None:
'''
Will iterate over all record fields and call the callback method.
Each record is guaranteed to be processed only once.
'''
for key, field in self._model_reader.read(record):
args = GroupByCallbackArgs(record, key, field)
_gen = self.callback(args)
try:
key_obj = next(_gen)
while True:
if self.config.key_obj_fn:
vobj = self._persist_multiple(args, key_obj)
else:
vobj = self._persist(args, key_obj)
# return groupby virtual object and continue iteration
if isinstance(_gen, Generator) and not _gen.gi_yieldfrom:
key_obj = _gen.send(vobj)
else:
key_obj = next(_gen)
except StopIteration:
del _gen
def _persist_multiple(self, args: 'GroupByCallbackArgs', obj: Any) \
-> Optional[GroupBySource]:
# if custom key mapping function defined, use that first
res = self.config.eval_key_obj_fn(on=args.record,
context={'X': obj, 'ARGS': args})
if isinstance(res, (list, tuple)):
for k in res:
self._persist(args, k) # 1-to-n replacement
return None
return self._persist(args, res) # normal & null replacement
def _persist(self, args: 'GroupByCallbackArgs', obj: Any) \
-> Optional[GroupBySource]:
''' Update internal state. Return grouping parent. '''
if not isinstance(obj, (str, bool, int, float)) and obj is not None:
raise ValueError(
'Unsupported groupby yield type for [{}]:'
' {} (expected str, got {})'.format(
self.config.key, obj, type(obj).__name__))
if obj is None:
# if obj is not set, test if config.replace_none_key is set
slug = self.config.replace_none_key
obj = slug
else:
# if obj is set, apply config.key_map (convert int -> str)
slug = self.config.slugify(str(obj)) or None
# if neither custom mapping succeeded, do not process further
if not slug or obj is None:
return None
# update internal object storage
alt = args.record.alt
if slug not in self._state[alt]:
src = GroupBySource(self._root_record[alt], slug, self.config)
self._state[alt][slug] = src
else:
src = self._state[alt][slug]
src.append_child(args.record, obj)
# reverse reference
VGroups.of(args.record).add(args.key, src)
return src
def remember(self, record: 'Record') -> None:
self._rmmbr.append(record)
def iter_sources(self) -> Iterator[GroupBySource]:
''' Prepare and yield GroupBySource elements. '''
for x in self._rmmbr:
self.process(x)
del self._rmmbr
for vobj_list in self._state.values():
for vobj in vobj_list.values():
yield vobj.finalize()
# cleanup. remove this code if you'd like to iter twice
del self._model_reader
del self._root_record
del self._state
def __repr__(self) -> str:
return '<GroupByWatcher key="{}" enabled={}>'.format(
self.config.key, self.config.enabled)

setup.py

@@ -1,11 +1,11 @@
from setuptools import setup

-with open('README.md') as fp:
+with open('README.md', encoding='utf8') as fp:
longdesc = fp.read()

setup(
name='lektor-groupby',
-py_modules=['lektor_groupby'],
+packages=['lektor_groupby'],
entry_points={
'lektor.plugins': [
'groupby = lektor_groupby:GroupByPlugin',
@@ -13,7 +13,7 @@ setup(
},
author='relikd',
url='https://github.com/relikd/lektor-groupby-plugin',
-version='0.8',
+version='0.9.9',
description='Cluster arbitrary records with field attribute keyword.',
long_description=longdesc,
long_description_content_type="text/markdown",
@@ -27,7 +27,6 @@ setup(
'cluster',
],
classifiers=[
-'Development Status :: 5 - Production/Stable',
'Environment :: Web Environment',
'Environment :: Plugins',
'Framework :: Lektor',