12 Commits

Author SHA1 Message Date
relikd
5118d19532 bump v0.9.6 2022-04-13 21:30:55 +02:00
relikd
1d9629566c efficient build
- postpone building until really needed
- rebuild only if artifacts change
- no build on source update
- prune takes current resolver state instead of global var
2022-04-13 15:41:57 +02:00
relikd
8ae5376d41 fix: most_used_key 2022-04-12 23:11:03 +02:00
relikd
340bc6611b one groupby per build thread + new resolver class 2022-04-11 01:41:17 +02:00
relikd
9dcd704283 move logic to VGroups.iter 2022-04-10 23:01:41 +02:00
relikd
d689a6cdf7 small fixes
- set child default object to field key
- strip whitespace if split
- ignore case for sort order
- setup.py package instead of module
2022-04-10 22:57:46 +02:00
relikd
b05dd31ff0 v0.9.5 2022-04-07 13:33:59 +02:00
relikd
16a26afdce fix data model enumeration with no flow blocks 2022-04-07 01:01:23 +02:00
relikd
c618ee458b v0.9.4 2022-04-06 22:12:06 +02:00
relikd
55916a4519 fix duplicate vobj for same slug 2022-04-06 20:52:53 +02:00
relikd
a694149d04 fix missing getitem 2022-04-06 17:55:27 +02:00
relikd
831cfa4e9c readme: link to relevant files 2022-04-06 17:36:19 +02:00
13 changed files with 413 additions and 354 deletions

View File

@@ -1,21 +1,16 @@
-.PHONY: help
-help:
-	@echo 'commands:'
-	@echo ' dist'
-dist-env:
-	@echo Creating virtual environment...
-	@python3 -m venv 'dist-env'
-	@source dist-env/bin/activate && pip install twine
-.PHONY: dist
-dist: dist-env
-	[ -z "$${VIRTUAL_ENV}" ] # you can not do this inside a virtual environment.
+dist: setup.py lektor_groupby/*
 	rm -rf dist
 	@echo Building...
 	python3 setup.py sdist bdist_wheel
 	@echo
 	rm -rf ./*.egg-info/ ./build/ MANIFEST
+env-publish:
+	@echo Creating virtual environment...
+	@python3 -m venv 'env-publish'
+	@source env-publish/bin/activate && pip install twine
 .PHONY: publish
-publish: dist
+publish: dist env-publish
+	[ -z "$${VIRTUAL_ENV}" ] # you can not do this inside a virtual environment.
 	@echo Publishing...
-	@echo "\033[0;31mEnter your PyPI token:\033[0m"
-	@source dist-env/bin/activate && export TWINE_USERNAME='__token__' && twine upload dist/*
+	@echo "\033[0;31mEnter PyPI token in password prompt:\033[0m"
+	@source env-publish/bin/activate && export TWINE_USERNAME='__token__' && twine upload dist/*

View File

@@ -2,4 +2,4 @@
 name = GroupBy Examples
 [packages]
-lektor-groupby = 0.9.3
+lektor-groupby = 0.9.6

View File

@@ -6,12 +6,14 @@ Overview:
 - [advanced example](#advanced-example) touches on the potentials of the plugin.
 - [Misc](#misc) shows other use-cases.
+After reading this tutorial, have a look at other plugins that use `lektor-groupby`:
+- [lektor-inlinetags](https://github.com/relikd/lektor-inlinetags-plugin)
 ## About
 To use the groupby plugin you have to add an attribute to your model file.
-In our case you can refer to the `models/page.ini` model:
+In our case you can refer to the [`models/page.ini`](./models/page.ini) model:
 ```ini
 [fields.tags]
@@ -36,10 +38,10 @@ The attribute name is later used for grouping.
 ## Quick config
 Relevant files:
-```
-configs/groupby.ini
-templates/example-config.html
-```
+- [`configs/groupby.ini`](./configs/groupby.ini)
+- [`templates/example-config.html`](./templates/example-config.html)
 The easiest way to add tags to your site is by defining the `groupby.ini` config file.
@@ -133,10 +135,10 @@ In your template file you have access to the attributes, config, and children (p
 ## Simple example
 Relevant files:
-```
-packages/simple-example/lektor_simple.py
-templates/example-simple.html
-```
+- [`packages/simple-example/lektor_simple.py`](./packages/simple-example/lektor_simple.py)
+- [`templates/example-simple.html`](./templates/example-simple.html)
 ```python
 def on_groupby_before_build_all(self, groupby, builder, **extra):
@@ -204,11 +206,11 @@ The template file can access and display the `extra-info`:
 ## Advanced example
 Relevant files:
-```
-configs/advanced.ini
-packages/advanced-example/lektor_advanced.py
-templates/example-advanced.html
-```
+- [`configs/advanced.ini`](./configs/advanced.ini)
+- [`packages/advanced-example/lektor_advanced.py`](./packages/advanced-example/lektor_advanced.py)
+- [`templates/example-advanced.html`](./templates/example-advanced.html)
 The following example is similar to the previous one.
 Except that it loads a config file and replaces in-text occurrences of `{{Tagname}}` with `<a href="/tag/">Tagname</a>`.
@@ -284,4 +286,4 @@ This can be done in combination with the next use-case:
 You can query the groups of any parent node (including those without slug).
 The keys (`'TestA', 'TestB'`) can be omitted which will return all groups of all attributes (you can still filter them with `x.config.key == 'TestC'`).
-Refer to `templates/page.html` for usage.
+Refer to [`templates/page.html`](./templates/page.html) for usage.

lektor_groupby/backref.py (new file, 59 lines)
View File

@@ -0,0 +1,59 @@
from lektor.context import get_ctx
from typing import TYPE_CHECKING, Iterator
from weakref import WeakSet
if TYPE_CHECKING:
    from lektor.builder import Builder
    from lektor.db import Record
    from .groupby import GroupBy
    from .vobj import GroupBySource


class GroupByRef:
    @staticmethod
    def of(builder: 'Builder') -> 'GroupBy':
        ''' Get the GroupBy object of a builder. '''
        return builder.__groupby  # type:ignore[attr-defined,no-any-return]

    @staticmethod
    def set(builder: 'Builder', groupby: 'GroupBy') -> None:
        ''' Set the GroupBy object of a builder. '''
        builder.__groupby = groupby  # type: ignore[attr-defined]


class VGroups:
    @staticmethod
    def of(record: 'Record') -> WeakSet:
        '''
        Return the (weak) set of virtual objects of a page.
        Creates a new set if it does not exist yet.
        '''
        try:
            wset = record.__vgroups  # type: ignore[attr-defined]
        except AttributeError:
            wset = WeakSet()
            record.__vgroups = wset  # type: ignore[attr-defined]
        return wset  # type: ignore[no-any-return]

    @staticmethod
    def iter(record: 'Record', *keys: str, recursive: bool = False) \
            -> Iterator['GroupBySource']:
        ''' Extract all referencing groupby virtual objects from a page. '''
        ctx = get_ctx()
        if not ctx:
            raise NotImplementedError("Shouldn't happen, where is my context?")
        # get GroupBy object
        builder = ctx.build_state.builder
        groupby = GroupByRef.of(builder)
        groupby.make_once(builder)  # ensure did cluster before
        # manage config dependencies
        for dep in groupby.dependencies:
            ctx.record_dependency(dep)
        # find groups
        proc_list = [record]
        while proc_list:
            page = proc_list.pop(0)
            if recursive and hasattr(page, 'children'):
                proc_list.extend(page.children)  # type: ignore[attr-defined]
            for vobj in VGroups.of(page):
                if not keys or vobj.config.key in keys:
                    yield vobj
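The back-reference set above is a `weakref.WeakSet`, which is why `GroupBy.build_all` can simply clear `_results` to "garbage collect weak refs": once no strong reference to a virtual object remains, it silently disappears from every record's set. A minimal sketch of that behavior with toy stand-in classes (not the plugin's actual types):

```python
import gc
from weakref import WeakSet


class FakeVObj:
    '''Hypothetical stand-in for GroupBySource.'''


refs = WeakSet()   # what VGroups.of(record) would return
vobj = FakeVObj()
refs.add(vobj)
print(len(refs))   # 1 while a strong reference exists

del vobj           # drop the last strong reference
gc.collect()       # make collection deterministic for the demo
print(len(refs))   # 0 -- the weak set no longer holds the object
```

This is the design reason the plugin needs no explicit cleanup of per-record sets between builds.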

View File

@@ -1,12 +1,14 @@
-from lektor.builder import Builder, PathCache
-from lektor.db import Record
-from lektor.sourceobj import SourceObject
-from lektor.utils import build_url
-from typing import Set, Dict, List, Optional, Tuple
-from .vobj import GroupBySource
-from .config import Config, AnyConfig
+from lektor.builder import PathCache
+from lektor.db import Record  # isinstance
+from typing import TYPE_CHECKING, Set, List
+from .config import Config
 from .watcher import Watcher
+if TYPE_CHECKING:
+    from .config import AnyConfig
+    from lektor.builder import Builder
+    from lektor.sourceobj import SourceObject
+    from .resolver import Resolver
+    from .vobj import GroupBySource
 class GroupBy:
@@ -16,39 +18,26 @@ class GroupBy:
     The grouping is performed only once per build.
     '''
-    def __init__(self) -> None:
+    def __init__(self, resolver: 'Resolver') -> None:
         self._watcher = []  # type: List[Watcher]
         self._results = []  # type: List[GroupBySource]
-        self._resolver = {}  # type: Dict[str, Tuple[str, Config]]
-    # ----------------
-    #   Add observer
-    # ----------------
-    def add_watcher(self, key: str, config: AnyConfig) -> Watcher:
+        self.resolver = resolver
+    def add_watcher(self, key: str, config: 'AnyConfig') -> Watcher:
         ''' Init Config and add to watch list. '''
         w = Watcher(Config.from_any(key, config))
         self._watcher.append(w)
         return w
-    # -----------
-    #   Builder
-    # -----------
-    def clear_previous_results(self) -> None:
-        ''' Reset prvious results. Must be called before each build. '''
-        self._watcher.clear()
-        self._results.clear()
-        self._resolver.clear()
     def get_dependencies(self) -> Set[str]:
         deps = set()  # type: Set[str]
         for w in self._watcher:
             deps.update(w.config.dependencies)
         return deps
-    def make_cluster(self, builder: Builder) -> None:
-        ''' Iterate over all children and perform groupby. '''
+    def queue_all(self, builder: 'Builder') -> None:
+        ''' Iterate full site-tree and queue all children. '''
+        self.dependencies = self.get_dependencies()
         # remove disabled watchers
         self._watcher = [w for w in self._watcher if w.config.enabled]
         if not self._watcher:
@@ -60,57 +49,32 @@ class GroupBy:
         queue = builder.pad.get_all_roots()  # type: List[SourceObject]
         while queue:
             record = queue.pop()
-            self.queue_now(record)
             if hasattr(record, 'attachments'):
                 queue.extend(record.attachments)  # type: ignore[attr-defined]
             if hasattr(record, 'children'):
                 queue.extend(record.children)  # type: ignore[attr-defined]
-        # build artifacts
-        for w in self._watcher:
-            root = builder.pad.get(w.config.root)
-            for vobj in w.iter_sources(root):
-                self._results.append(vobj)
-                if vobj.slug:
-                    self._resolver[vobj.url_path] = (vobj.group, w.config)
-        self._watcher.clear()
-    def queue_now(self, node: SourceObject) -> None:
-        ''' Process record immediatelly (No-Op if already processed). '''
-        if isinstance(node, Record):
-            for w in self._watcher:
-                if w.should_process(node):
-                    w.process(node)
+            if isinstance(record, Record):
+                for w in self._watcher:
+                    if w.should_process(record):
+                        w.process(record)
+    def make_once(self, builder: 'Builder') -> None:
+        ''' Perform groupby, iter over sources with watcher callback. '''
+        if self._watcher:
+            self.resolver.reset()
+            for w in self._watcher:
+                root = builder.pad.get(w.config.root)
+                for vobj in w.iter_sources(root):
+                    self._results.append(vobj)
+                    self.resolver.add(vobj)
+            self._watcher.clear()
-    def build_all(self, builder: Builder) -> None:
+    def build_all(self, builder: 'Builder') -> None:
         ''' Create virtual objects and build sources. '''
+        self.make_once(builder)  # in case no page used the |vgroups filter
         path_cache = PathCache(builder.env)
         for vobj in self._results:
             if vobj.slug:
                 builder.build(vobj, path_cache)
         del path_cache
         self._results.clear()  # garbage collect weak refs
-    # -----------------
-    #   Path resolver
-    # -----------------
-    def resolve_dev_server_path(self, node: SourceObject, pieces: List[str]) \
-            -> Optional[GroupBySource]:
-        ''' Dev server only: Resolves path/ -> path/index.html '''
-        if isinstance(node, Record):
-            rv = self._resolver.get(build_url([node.url_path] + pieces))
-            if rv:
-                return GroupBySource(node, group=rv[0], config=rv[1])
-        return None
-    def resolve_virtual_path(self, node: SourceObject, pieces: List[str]) \
-            -> Optional[GroupBySource]:
-        ''' Admin UI only: Prevent server error and null-redirect. '''
-        if isinstance(node, Record) and len(pieces) >= 2:
-            path = node['_path']  # type: str
-            key, grp, *_ = pieces
-            for group, conf in self._resolver.values():
-                if key == conf.key and path == conf.root:
-                    if conf.slugify(group) == grp:
-                        return GroupBySource(node, group, conf)
-        return None

lektor_groupby/model.py (new file, 66 lines)
View File

@@ -0,0 +1,66 @@
from lektor.db import Database, Record  # typing
from lektor.types.flow import Flow, FlowType
from lektor.utils import bool_from_string
from typing import Set, Dict, Tuple, Any, NamedTuple, Optional, Iterator


class FieldKeyPath(NamedTuple):
    fieldKey: str
    flowIndex: Optional[int] = None
    flowKey: Optional[str] = None


class ModelReader:
    '''
    Find models and flow-models which contain attribute.
    Flows are either returned directly (flatten=False) or
    expanded so that each flow-block is yielded (flatten=True)
    '''

    def __init__(self, db: Database, attr: str, flatten: bool = False) -> None:
        self.flatten = flatten
        self._flows = {}  # type: Dict[str, Set[str]]
        self._models = {}  # type: Dict[str, Dict[str, str]]
        # find flow blocks containing attribute
        for key, flow in db.flowblocks.items():
            tmp1 = set(f.name for f in flow.fields
                       if bool_from_string(f.options.get(attr, False)))
            if tmp1:
                self._flows[key] = tmp1
        # find models and flow-blocks containing attribute
        for key, model in db.datamodels.items():
            tmp2 = {}  # Dict[str, str]
            for field in model.fields:
                if bool_from_string(field.options.get(attr, False)):
                    tmp2[field.name] = '*'  # include all children
                elif isinstance(field.type, FlowType) and self._flows:
                    # only processed if at least one flow has attr
                    fbs = field.type.flow_blocks
                    # if fbs == None, all flow-blocks are allowed
                    if fbs is None or any(x in self._flows for x in fbs):
                        tmp2[field.name] = '?'  # only some flow blocks
            if tmp2:
                self._models[key] = tmp2

    def read(self, record: Record) -> Iterator[Tuple[FieldKeyPath, Any]]:
        ''' Enumerate all fields of a Record with attrib = True. '''
        assert isinstance(record, Record)
        for r_key, subs in self._models.get(record.datamodel.id, {}).items():
            field = record[r_key]
            if not field:
                continue
            if subs == '*':  # either normal field or flow type (all blocks)
                if self.flatten and isinstance(field, Flow):
                    for i, flow in enumerate(field.blocks):
                        flowtype = flow['_flowblock']
                        for f_key, block in flow._data.items():
                            if f_key.startswith('_'):  # e.g., _flowblock
                                continue
                            yield FieldKeyPath(r_key, i, f_key), block
                else:
                    yield FieldKeyPath(r_key), field
            else:  # always flow type (only some blocks)
                for i, flow in enumerate(field.blocks):
                    flowtype = flow['_flowblock']
                    for f_key in self._flows.get(flowtype, []):
                        yield FieldKeyPath(r_key, i, f_key), flow[f_key]
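The `FieldKeyPath` tuple addresses a value inside a record: a plain field yields `(fieldKey, None, None)`, while a field inside a flow block yields `(fieldKey, blockIndex, blockFieldKey)`. A quick illustration of the addressing scheme using the same tuple shape, without Lektor itself (field names are made up):

```python
from typing import NamedTuple, Optional


class FieldKeyPath(NamedTuple):  # same shape as in model.py
    fieldKey: str
    flowIndex: Optional[int] = None
    flowKey: Optional[str] = None


plain = FieldKeyPath('tags')             # a normal record field
nested = FieldKeyPath('body', 2, 'tags')  # field 'tags' in the 3rd flow block of 'body'
print(plain)             # FieldKeyPath(fieldKey='tags', flowIndex=None, flowKey=None)
print(nested.flowIndex)  # 2
```

Because it is a `NamedTuple`, a path compares equal to the corresponding plain tuple, which keeps it cheap to use as a dict key or in tests.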

View File

@@ -1,67 +1,81 @@
-from lektor.builder import Builder  # typing
+from lektor.db import Page  # isinstance
 from lektor.pluginsystem import Plugin  # subclass
-from lektor.sourceobj import SourceObject  # typing
-from typing import List, Optional, Iterator
-from .vobj import GroupBySource, GroupByBuildProgram, VPATH
+from typing import TYPE_CHECKING, Iterator, Any
+from .backref import GroupByRef, VGroups
 from .groupby import GroupBy
 from .pruner import prune
-from .watcher import GroupByCallbackArgs  # typing
+from .resolver import Resolver
+from .vobj import VPATH, GroupBySource, GroupByBuildProgram
+if TYPE_CHECKING:
+    from lektor.builder import Builder, BuildState
+    from lektor.sourceobj import SourceObject
+    from .watcher import GroupByCallbackArgs
 class GroupByPlugin(Plugin):
     name = 'GroupBy Plugin'
     description = 'Cluster arbitrary records with field attribute keyword.'
-    def on_setup_env(self, **extra: object) -> None:
-        self.creator = GroupBy()
+    def on_setup_env(self, **extra: Any) -> None:
+        self.has_changes = False
+        self.resolver = Resolver(self.env)
         self.env.add_build_program(GroupBySource, GroupByBuildProgram)
-        self.env.jinja_env.filters.update(vgroups=GroupBySource.of_record)
-        # resolve /tag/rss/ -> /tag/rss/index.html (local server only)
-        @self.env.urlresolver
-        def a(node: SourceObject, parts: List[str]) -> Optional[GroupBySource]:
-            return self.creator.resolve_dev_server_path(node, parts)
-        # resolve virtual objects in admin UI
-        @self.env.virtualpathresolver(VPATH.lstrip('@'))
-        def b(node: SourceObject, parts: List[str]) -> Optional[GroupBySource]:
-            return self.creator.resolve_virtual_path(node, parts)
+        self.env.jinja_env.filters.update(vgroups=VGroups.iter)
+    def on_before_build(
+        self, builder: 'Builder', source: 'SourceObject', **extra: Any
+    ) -> None:
+        # before-build may be called before before-build-all (issue #1017)
+        # make sure it is always evaluated first
+        if isinstance(source, Page):
+            self._init_once(builder)
+    def on_after_build(self, build_state: 'BuildState', **extra: Any) -> None:
+        if build_state.updated_artifacts:
+            self.has_changes = True
+    def on_after_build_all(self, builder: 'Builder', **extra: Any) -> None:
+        # only rebuild if has changes (bypass idle builds)
+        # or the very first time after startup (url resolver & pruning)
+        if self.has_changes or not self.resolver.has_any:
+            self._init_once(builder).build_all(builder)  # updates resolver
+        self.has_changes = False
+    def on_after_prune(self, builder: 'Builder', **extra: Any) -> None:
+        # TODO: find a better way to prune unreferenced elements
+        prune(builder, VPATH, self.resolver.files)
+    # ------------
+    #   internal
+    # ------------
+    def _init_once(self, builder: 'Builder') -> GroupBy:
+        try:
+            return GroupByRef.of(builder)
+        except AttributeError:
+            groupby = GroupBy(self.resolver)
+            GroupByRef.set(builder, groupby)
+            self._load_quick_config(groupby)
+            # let other plugins register their @groupby.watch functions
+            self.emit('before-build-all', groupby=groupby, builder=builder)
+            groupby.queue_all(builder)
+            return groupby
-    def _load_quick_config(self) -> None:
+    def _load_quick_config(self, groupby: GroupBy) -> None:
         ''' Load config file quick listeners. '''
         config = self.get_config()
         for key in config.sections():
             if '.' in key:  # e.g., key.fields and key.key_map
                 continue
-            watcher = self.creator.add_watcher(key, config)
+            watcher = groupby.add_watcher(key, config)
             split = config.get(key + '.split')  # type: str
             @watcher.grouping()
-            def _fn(args: GroupByCallbackArgs) -> Iterator[str]:
+            def _fn(args: 'GroupByCallbackArgs') -> Iterator[str]:
                 val = args.field
                 if isinstance(val, str):
-                    val = val.split(split) if split else [val]  # make list
+                    val = map(str.strip, val.split(split)) if split else [val]
-                if isinstance(val, list):
+                if isinstance(val, (list, map)):
                     yield from val
-    def on_before_build_all(self, builder: Builder, **extra: object) -> None:
-        self.creator.clear_previous_results()
-        self._load_quick_config()
-        # let other plugins register their @groupby.watch functions
-        self.emit('before-build-all', groupby=self.creator, builder=builder)
-        self.config_dependencies = self.creator.get_dependencies()
-        self.creator.make_cluster(builder)
-    def on_before_build(self, source: SourceObject, **extra: object) -> None:
-        # before-build may be called before before-build-all (issue #1017)
-        # make sure it is evaluated immediatelly
-        self.creator.queue_now(source)
-    def on_after_build_all(self, builder: Builder, **extra: object) -> None:
-        self.creator.build_all(builder)
-    def on_after_prune(self, builder: Builder, **extra: object) -> None:
-        # TODO: find a better way to prune unreferenced elements
-        prune(builder, VPATH)
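`_init_once` caches the `GroupBy` instance on the builder object itself (via `GroupByRef` and Python's name mangling), so whichever event fires first, every later call during the same build reuses one instance. The pattern in isolation, with toy stand-in names:

```python
class Holder:
    '''Hypothetical stand-in for GroupByRef.'''
    @staticmethod
    def of(obj):
        return obj.__cache   # name-mangled to obj._Holder__cache

    @staticmethod
    def set(obj, value):
        obj.__cache = value  # same mangled attribute


class Builder:
    '''Stand-in for lektor.builder.Builder.'''


def init_once(builder):
    # same shape as GroupByPlugin._init_once: expensive setup runs at most once
    try:
        return Holder.of(builder)
    except AttributeError:
        Holder.set(builder, object())
        return Holder.of(builder)


b = Builder()
first = init_once(b)
print(init_once(b) is first)  # True -- same object on every later call
```

Attaching state to the builder (rather than the plugin) scopes the cache to one build: a fresh `Builder` starts without the attribute, so the setup naturally reruns.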

View File

@@ -2,29 +2,36 @@
 Static collector for build-artifact urls.
 All non-tracked VPATH-urls will be pruned after build.
 '''
-from lektor.builder import Builder  # typing
 from lektor.reporter import reporter  # report_pruned_artifact
 from lektor.utils import prune_file_and_folder
+from typing import TYPE_CHECKING, Set, Iterable
+if TYPE_CHECKING:
+    from lektor.builder import Builder
-_cache = set()
-# Note: this var is static or otherwise two instances of
-# this module would prune each others artifacts.
-def track_not_prune(url: str) -> None:
-    ''' Add url to build cache to prevent pruning. '''
-    _cache.add(url.lstrip('/'))
+def _normalize_url_cache(url_cache: Iterable[str]) -> Set[str]:
+    cache = set()
+    for url in url_cache:
+        if url.endswith('/'):
+            url += 'index.html'
+        cache.add(url.lstrip('/'))
+    return cache
-def prune(builder: Builder, vpath: str) -> None:
-    ''' Remove previously generated, unreferenced Artifacts. '''
+def prune(builder: 'Builder', vpath: str, url_cache: Iterable[str]) -> None:
+    '''
+    Remove previously generated, unreferenced Artifacts.
+    All urls in url_cache must have a trailing "/index.html" (instead of "/")
+    and also, no leading slash, "blog/index.html" instead of "/blog/index.html"
+    '''
     vpath = '@' + vpath.lstrip('@')  # just in case of user error
     dest_path = builder.destination_path
+    url_cache = _normalize_url_cache(url_cache)
     con = builder.connect_to_database()
     try:
         with builder.new_build_state() as build_state:
             for url, file in build_state.iter_artifacts():
-                if url.lstrip('/') in _cache:
+                if url.lstrip('/') in url_cache:
                     continue  # generated in this build-run
                 infos = build_state.get_artifact_dependency_infos(url, [])
                 for artifact_name, _ in infos:
@@ -36,4 +43,3 @@ def prune(builder: Builder, vpath: str) -> None:
                 break  # there is only one VPATH-entry per source
     finally:
         con.close()
-    _cache.clear()

View File

@@ -0,0 +1,62 @@
from lektor.db import Record  # isinstance
from lektor.utils import build_url
from typing import TYPE_CHECKING, Dict, List, Tuple, Optional, Iterable
from .vobj import VPATH, GroupBySource
if TYPE_CHECKING:
    from lektor.environment import Environment
    from lektor.sourceobj import SourceObject
    from .config import Config


class Resolver:
    '''
    Resolve virtual paths and urls ending in /.
    Init will subscribe to @urlresolver and @virtualpathresolver.
    '''

    def __init__(self, env: 'Environment') -> None:
        self._data = {}  # type: Dict[str, Tuple[str, Config]]
        env.urlresolver(self.resolve_server_path)
        env.virtualpathresolver(VPATH.lstrip('@'))(self.resolve_virtual_path)

    @property
    def has_any(self) -> bool:
        return bool(self._data)

    @property
    def files(self) -> Iterable[str]:
        return self._data

    def reset(self) -> None:
        ''' Clear previously recorded virtual objects. '''
        self._data.clear()

    def add(self, vobj: GroupBySource) -> None:
        ''' Track new virtual object (only if slug is set). '''
        if vobj.slug:
            self._data[vobj.url_path] = (vobj.group, vobj.config)

    # ------------
    #   Resolver
    # ------------

    def resolve_server_path(self, node: 'SourceObject', pieces: List[str]) \
            -> Optional[GroupBySource]:
        ''' Local server only: resolve /tag/rss/ -> /tag/rss/index.html '''
        if isinstance(node, Record):
            rv = self._data.get(build_url([node.url_path] + pieces))
            if rv:
                return GroupBySource(node, group=rv[0], config=rv[1])
        return None

    def resolve_virtual_path(self, node: 'SourceObject', pieces: List[str]) \
            -> Optional[GroupBySource]:
        ''' Admin UI only: Prevent server error and null-redirect. '''
        if isinstance(node, Record) and len(pieces) >= 2:
            path = node['_path']  # type: str
            key, grp, *_ = pieces
            for group, conf in self._data.values():
                if key == conf.key and path == conf.root:
                    if conf.slugify(group) == grp:
                        return GroupBySource(node, group, conf)
        return None
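At its core the resolver is a dict keyed by final url: `make_once` fills it through `add`, and the dev server asks whether a requested path like `/tag/python/` belongs to a tracked virtual object. A toy version of that lookup, with url joining simplified and Lektor's `build_url` not used (all names here are illustrative):

```python
data = {}  # url -> (group, config-key), filled during the build


def add(url, group, config_key):
    '''Track a generated page under its final url.'''
    data[url] = (group, config_key)


def resolve(parent_url, pieces):
    '''Return the tracked (group, config-key) for parent_url + pieces, if any.'''
    url = parent_url.rstrip('/') + '/' + '/'.join(pieces) + '/'
    return data.get(url)


add('/tag/python/', 'Python', 'tag')
print(resolve('/', ['tag', 'python']))  # ('Python', 'tag')
print(resolve('/', ['tag', 'rust']))    # None
```

Keeping the mapping on a long-lived `Resolver` (instead of the per-build `GroupBy`) is what lets the dev server answer requests between builds, and `has_any` doubles as the "first build after startup" signal in `plugin.py`.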

View File

@@ -1,5 +1,7 @@
 from lektor.reporter import reporter, style
+from typing import List, Dict
 def report_config_error(key: str, field: str, val: str, e: Exception) -> None:
     ''' Send error message to Lektor reporter. Indicate which field is bad. '''
@@ -9,3 +11,18 @@ def report_config_error(key: str, field: str, val: str, e: Exception) -> None:
         reporter._write_line(style(msg, fg='red'))
     except Exception:
         print(msg)  # fallback in case Lektor API changes
+def most_used_key(keys: List[str]) -> str:
+    if len(keys) < 3:
+        return keys[0]  # TODO: first vs last occurrence
+    best_count = 0
+    best_key = ''
+    tmp = {}  # type: Dict[str, int]
+    for k in keys:
+        num = (tmp[k] + 1) if k in tmp else 1
+        tmp[k] = num
+        if num > best_count:  # TODO: (>) vs (>=), first vs last occurrence
+            best_count = num
+            best_key = k
+    return best_key
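`most_used_key` (the subject of fix commit 8ae5376d41) picks the most frequent spelling of a group name, so mixed-case tags like `Tag`/`tag` collapse onto the majority variant. A standalone check of the logic as copied from the diff above:

```python
from typing import List, Dict


def most_used_key(keys: List[str]) -> str:
    # copied from util.py above
    if len(keys) < 3:
        return keys[0]
    best_count = 0
    best_key = ''
    tmp = {}  # type: Dict[str, int]
    for k in keys:
        num = (tmp[k] + 1) if k in tmp else 1
        tmp[k] = num
        if num > best_count:
            best_count = num
            best_key = k
    return best_key


print(most_used_key(['Tag', 'tag', 'Tag']))  # 'Tag'
print(most_used_key(['a', 'b']))             # 'a' (fewer than 3: first wins)
```

For unambiguous input this matches `collections.Counter(keys).most_common(1)`; the `TODO` comments flag the deliberate tie-breaking choices (strict `>` keeps the first key that reaches the top count).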

View File

@@ -1,16 +1,15 @@
 from lektor.build_programs import BuildProgram  # subclass
-from lektor.builder import Artifact  # typing
 from lektor.context import get_ctx
-from lektor.db import Record  # typing
 from lektor.environment import Expression
 from lektor.sourceobj import VirtualSourceObject  # subclass
 from lektor.utils import build_url
-from typing import Dict, List, Any, Optional, Iterator
-from weakref import WeakSet
-from .config import Config
-from .pruner import track_not_prune
+from typing import TYPE_CHECKING, Dict, List, Any, Optional, Iterator
+from .backref import VGroups
 from .util import report_config_error
+if TYPE_CHECKING:
+    from lektor.builder import Artifact
+    from lektor.db import Record
+    from .config import Config
 VPATH = '@groupby'  # potentially unsafe. All matching entries are pruned.
@@ -28,15 +27,16 @@ class GroupBySource(VirtualSourceObject):
     def __init__(
         self,
-        record: Record,
+        record: 'Record',
         group: str,
-        config: Config,
-        children: Optional[Dict[Record, List[Any]]] = None,
+        config: 'Config',
+        children: Optional[Dict['Record', List[Any]]] = None,
     ) -> None:
         super().__init__(record)
         self.key = config.slugify(group)
         self.group = group
         self.config = config
+        self._children = children or {}  # type: Dict[Record, List[Any]]
         # evaluate slug Expression
         if config.slug and '{key}' in config.slug:
             self.slug = config.slug.replace('{key}', self.key)
@@ -45,16 +45,12 @@ class GroupBySource(VirtualSourceObject):
         assert self.slug != Ellipsis, 'invalid config: ' + config.slug
         if self.slug and self.slug.endswith('/index.html'):
             self.slug = self.slug[:-10]
-        # make sure children are on the same pad
-        self._children = {}  # type: Dict[Record, List[Any]]
-        for child, extras in (children or {}).items():
-            if child.pad != record.pad:
-                child = record.pad.get(child.path)
-            self._children[child] = extras
-        self._reverse_reference_records()
         # extra fields
         for attr, expr in config.fields.items():
             setattr(self, attr, self._eval(expr, field='fields.' + attr))
+        # back-ref
+        for child in self._children:
+            VGroups.of(child).add(self)
     def _eval(self, value: Any, *, field: str) -> Any:
         ''' Internal only: evaluates Lektor config file field expression. '''
@@ -94,12 +90,12 @@ class GroupBySource(VirtualSourceObject):
     # -----------------------
     @property
-    def children(self) -> Dict[Record, List[Any]]:
+    def children(self) -> Dict['Record', List[Any]]:
         ''' Returns dict with page record key and (optional) extra value. '''
         return self._children
     @property
-    def first_child(self) -> Optional[Record]:
+    def first_child(self) -> Optional['Record']:
         ''' Returns first referencing page record. '''
         if self._children:
             return iter(self._children).__next__()
@@ -114,16 +110,14 @@ class GroupBySource(VirtualSourceObject):
         return val[0] if val else None
     def __getitem__(self, key: str) -> Any:
-        # Used for virtual path resolver and |sort(attribute="x") filter
+        # Used for virtual path resolver
         if key in ('_path', '_alt'):
             return getattr(self, key[1:])
-        if hasattr(self, key):
-            return getattr(self, key)
-        return None
+        return self.__missing__(key)  # type: ignore[attr-defined]
     def __lt__(self, other: 'GroupBySource') -> bool:
         # Used for |sort filter ("group" is the provided original string)
-        return self.group < other.group
+        return self.group.lower() < other.group.lower()
     def __eq__(self, other: object) -> bool:
         # Used for |unique filter
@@ -140,41 +134,6 @@ class GroupBySource(VirtualSourceObject):
return '<GroupBySource path="{}" children={}>'.format( return '<GroupBySource path="{}" children={}>'.format(
self.path, len(self._children)) self.path, len(self._children))
-    # ---------------------
-    # Reverse Reference
-    # ---------------------
-    def _reverse_reference_records(self) -> None:
-        ''' Attach self to page records. '''
-        for child in self._children:
-            if not hasattr(child, '_vgroups'):
-                child._vgroups = WeakSet()  # type: ignore[attr-defined]
-            child._vgroups.add(self)  # type: ignore[attr-defined]
-    @staticmethod
-    def of_record(
-        record: Record,
-        *keys: str,
-        recursive: bool = False
-    ) -> Iterator['GroupBySource']:
-        ''' Extract all referencing groupby virtual objects from a page. '''
-        ctx = get_ctx()
-        # manage dependencies
-        if ctx:
-            for dep in ctx.env.plugins['groupby'].config_dependencies:
-                ctx.record_dependency(dep)
-        # find groups
-        proc_list = [record]
-        while proc_list:
-            page = proc_list.pop(0)
-            if recursive and hasattr(page, 'children'):
-                proc_list.extend(page.children)  # type: ignore[attr-defined]
-            if not hasattr(page, '_vgroups'):
-                continue
-            for vobj in page._vgroups:  # type: ignore[attr-defined]
-                if not keys or vobj.config.key in keys:
-                    yield vobj
 # -----------------------------------
 # BuildProgram
@@ -189,9 +148,8 @@ class GroupByBuildProgram(BuildProgram):
             url += 'index.html'
         self.declare_artifact(url, sources=list(
             self.source.iter_source_filenames()))
-        track_not_prune(url)
-    def build_artifact(self, artifact: Artifact) -> None:
+    def build_artifact(self, artifact: 'Artifact') -> None:
         get_ctx().record_virtual_dependency(self.source)
         artifact.render_template_into(
             self.source.config.template, this=self.source)
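The `__lt__` change above makes Jinja's `|sort` filter order groups case-insensitively (per the "ignore case for sort order" commit). A minimal standalone sketch of the effect — the `Group` class and sample data are illustrative stand-ins, not part of the plugin:

```python
from functools import total_ordering


@total_ordering
class Group:
    """Minimal stand-in for GroupBySource's ordering behavior."""

    def __init__(self, group: str) -> None:
        self.group = group

    def __eq__(self, other: object) -> bool:
        return isinstance(other, Group) and self.group.lower() == other.group.lower()

    def __lt__(self, other: 'Group') -> bool:
        # compare case-insensitively, as in the updated __lt__
        return self.group.lower() < other.group.lower()


names = ['banana', 'Apple', 'cherry']
print([g.group for g in sorted(Group(n) for n in names)])
# → ['Apple', 'banana', 'cherry']
```

Without the `.lower()` calls, ASCII ordering would sort all capitalized names before lowercase ones (`['Apple', 'banana', 'cherry']` would become `['Apple', 'banana', 'cherry']` only by accident of this sample; e.g. `'Zebra'` would sort before `'apple'`).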

View File

@@ -1,26 +1,17 @@
-from lektor.db import Database, Record  # typing
-from lektor.types.flow import Flow, FlowType
-from lektor.utils import bool_from_string
-from typing import Set, Dict, List, Tuple, Any, Union, NamedTuple
-from typing import Optional, Callable, Iterable, Iterator, Generator
+from typing import TYPE_CHECKING, Dict, List, Tuple, Any, Union, NamedTuple
+from typing import Optional, Callable, Iterator, Generator
+from .model import ModelReader
+from .util import most_used_key
 from .vobj import GroupBySource
-from .config import Config
+if TYPE_CHECKING:
+    from lektor.db import Database, Record
+    from .config import Config
+    from .model import FieldKeyPath
-# -----------------------------------
-# Typing
-# -----------------------------------
-class FieldKeyPath(NamedTuple):
-    fieldKey: str
-    flowIndex: Optional[int] = None
-    flowKey: Optional[str] = None
 class GroupByCallbackArgs(NamedTuple):
-    record: Record
-    key: FieldKeyPath
+    record: 'Record'
+    key: 'FieldKeyPath'
     field: Any  # lektor model data-field value
@@ -30,111 +21,15 @@ GroupingCallback = Callable[[GroupByCallbackArgs], Union[
 ]]
-# -----------------------------------
-# ModelReader
-# -----------------------------------
-class GroupByModelReader:
-    ''' Find models and flow-models which contain attribute '''
-    def __init__(self, db: Database, attrib: str) -> None:
-        self._flows = {}  # type: Dict[str, Set[str]]
-        self._models = {}  # type: Dict[str, Dict[str, str]]
-        # find flow blocks containing attribute
-        for key, flow in db.flowblocks.items():
-            tmp1 = set(f.name for f in flow.fields
-                       if bool_from_string(f.options.get(attrib, False)))
-            if tmp1:
-                self._flows[key] = tmp1
-        # find models and flow-blocks containing attribute
-        for key, model in db.datamodels.items():
-            tmp2 = {}  # Dict[str, str]
-            for field in model.fields:
-                if bool_from_string(field.options.get(attrib, False)):
-                    tmp2[field.name] = '*'  # include all children
-                elif isinstance(field.type, FlowType) and self._flows:
-                    # only processed if at least one flow has attrib
-                    fbs = field.type.flow_blocks
-                    # if fbs == None, all flow-blocks are allowed
-                    if fbs is None or any(x in self._flows for x in fbs):
-                        tmp2[field.name] = '?'  # only some flow blocks
-            if tmp2:
-                self._models[key] = tmp2
-    def read(
-        self,
-        record: Record,
-        flatten: bool = False
-    ) -> Iterator[Tuple[FieldKeyPath, Any]]:
-        '''
-        Enumerate all fields of a Record with attrib = True.
-        Flows are either returned directly (flatten=False) or
-        expanded so that each flow-block is yielded (flatten=True)
-        '''
-        assert isinstance(record, Record)
-        for r_key, subs in self._models.get(record.datamodel.id, {}).items():
-            if subs == '*':  # either normal field or flow type (all blocks)
-                field = record[r_key]
-                if flatten and isinstance(field, Flow):
-                    for i, flow in enumerate(field.blocks):
-                        flowtype = flow['_flowblock']
-                        for f_key, block in flow._data.items():
-                            if f_key.startswith('_'):  # e.g., _flowblock
-                                continue
-                            yield FieldKeyPath(r_key, i, f_key), block
-                else:
-                    yield FieldKeyPath(r_key), field
-            else:  # always flow type (only some blocks)
-                for i, flow in enumerate(record[r_key].blocks):
-                    flowtype = flow['_flowblock']
-                    for f_key in self._flows.get(flowtype, []):
-                        yield FieldKeyPath(r_key, i, f_key), flow[f_key]
-# -----------------------------------
-# State
-# -----------------------------------
-class GroupByState:
-    ''' Store and update a groupby build state. {group: {record: [extras]}} '''
-    def __init__(self) -> None:
-        self.state = {}  # type: Dict[str, Dict[Record, List[Any]]]
-        self._processed = set()  # type: Set[Record]
-    def __contains__(self, record: Record) -> bool:
-        ''' Returns True if record was already processed. '''
-        return record.path in self._processed
-    def items(self) -> Iterable[Tuple[str, Dict[Record, List[Any]]]]:
-        ''' Iterable with (group, {record: [extras]}) tuples. '''
-        return self.state.items()
-    def add(self, record: Record, sub_groups: Dict[str, List[Any]]) -> None:
-        ''' Append groups if not processed already. {group: [extras]} '''
-        if record.path not in self._processed:
-            self._processed.add(record.path)
-            for group, extras in sub_groups.items():
-                if group in self.state:
-                    self.state[group][record] = extras
-                else:
-                    self.state[group] = {record: extras}
-# -----------------------------------
-# Watcher
-# -----------------------------------
 class Watcher:
     '''
     Callback is called with (Record, FieldKeyPath, field-value).
     Callback may yield one or more (group, extra-info) tuples.
     '''
-    def __init__(self, config: Config) -> None:
+    def __init__(self, config: 'Config') -> None:
         self.config = config
-        self.flatten = True
-        self.callback = None  # type: GroupingCallback #type:ignore[assignment]
+        self._root = self.config.root
     def grouping(self, flatten: bool = True) \
             -> Callable[[GroupingCallback], None]:
@@ -149,50 +44,72 @@ class Watcher:
         self.callback = fn
         return _decorator
-    def initialize(self, db: Database) -> None:
+    def initialize(self, db: 'Database') -> None:
         ''' Reset internal state. You must initialize before each build! '''
         assert callable(self.callback), 'No grouping callback provided.'
-        self._root = self.config.root
-        self._state = GroupByState()
-        self._model_reader = GroupByModelReader(db, attrib=self.config.key)
+        self._model_reader = ModelReader(db, self.config.key, self.flatten)
+        self._state = {}  # type: Dict[str, Dict[Record, List[Any]]]
+        self._group_map = {}  # type: Dict[str, List[str]]
-    def should_process(self, node: Record) -> bool:
+    def should_process(self, node: 'Record') -> bool:
         ''' Check if record path is being watched. '''
         return node['_path'].startswith(self._root)
-    def process(self, record: Record) -> None:
+    def process(self, record: 'Record') -> None:
         '''
         Will iterate over all record fields and call the callback method.
         Each record is guaranteed to be processed only once.
         '''
-        if record in self._state:
-            return
-        tmp = {}  # type: Dict[str, List[Any]]  # {group: [extras]}
-        for key, field in self._model_reader.read(record, self.flatten):
+        for key, field in self._model_reader.read(record):
             _gen = self.callback(GroupByCallbackArgs(record, key, field))
             try:
                 obj = next(_gen)
                 while True:
                     if not isinstance(obj, (str, tuple)):
                         raise TypeError(f'Unsupported groupby yield: {obj}')
-                    group = obj if isinstance(obj, str) else obj[0]
-                    if group not in tmp:
-                        tmp[group] = []
-                    if isinstance(obj, tuple):
-                        tmp[group].append(obj[1])
+                    slug = self._persist(record, key, obj)
                     # return slugified group key and continue iteration
                     if isinstance(_gen, Generator) and not _gen.gi_yieldfrom:
-                        obj = _gen.send(self.config.slugify(group))
+                        obj = _gen.send(slug)
                     else:
                         obj = next(_gen)
             except StopIteration:
                 del _gen
-        self._state.add(record, tmp)
-    def iter_sources(self, root: Record) -> Iterator[GroupBySource]:
+    def _persist(
+        self,
+        record: 'Record',
+        key: 'FieldKeyPath',
+        obj: Union[str, tuple]
+    ) -> str:
+        ''' Update internal state. Return slugified string. '''
+        group = obj if isinstance(obj, str) else obj[0]
+        slug = self.config.slugify(group)
+        # init group-key
+        if slug not in self._state:
+            self._state[slug] = {}
+            self._group_map[slug] = []
+        # _group_map is later used to find most used group
+        self._group_map[slug].append(group)
+        # init group extras
+        if record not in self._state[slug]:
+            self._state[slug][record] = []
+        # append extras (or default value)
+        if isinstance(obj, tuple):
+            self._state[slug][record].append(obj[1])
+        else:
+            self._state[slug][record].append(key.fieldKey)
+        return slug
+    def iter_sources(self, root: 'Record') -> Iterator[GroupBySource]:
         ''' Prepare and yield GroupBySource elements. '''
-        for group, children in self._state.items():
+        for key, children in self._state.items():
+            group = most_used_key(self._group_map[key])
             yield GroupBySource(root, group, self.config, children=children)
+        # cleanup. remove this code if you'd like to iter twice
+        del self._model_reader
+        del self._state
+        del self._group_map
     def __repr__(self) -> str:
         return '<GroupByWatcher key="{}" enabled={} callback={}>'.format(
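The new `iter_sources` picks a display name per slug via `most_used_key` (the "fix: most_used_key" commit): `_persist` records every original spelling that maps to a slug, and the most frequent spelling wins. Assuming the `.util` helper simply returns the most common entry, its behavior can be sketched as:

```python
from collections import Counter


def most_used_key(keys):
    # return the entry that occurs most often
    # (assumed behavior of the helper imported from .util)
    return Counter(keys).most_common(1)[0][0]


# Two records tagged "Victoria Falls" and one tagged "victoria falls":
# all three slugify to "victoria-falls", so the majority spelling is
# chosen as the group's display name.
group_map = {
    'victoria-falls': ['Victoria Falls', 'victoria falls', 'Victoria Falls'],
}
print(most_used_key(group_map['victoria-falls']))  # → Victoria Falls
```

This keeps URLs stable (one page per slug) while letting the rendered group title reflect how authors actually spelled the tag most of the time.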

View File

@@ -5,7 +5,7 @@ with open('README.md') as fp:
 setup(
     name='lektor-groupby',
-    py_modules=['lektor_groupby'],
+    packages=['lektor_groupby'],
     entry_points={
         'lektor.plugins': [
             'groupby = lektor_groupby:GroupByPlugin',
@@ -13,7 +13,7 @@ setup(
     },
     author='relikd',
     url='https://github.com/relikd/lektor-groupby-plugin',
-    version='0.9.3',
+    version='0.9.6',
     description='Cluster arbitrary records with field attribute keyword.',
     long_description=longdesc,
     long_description_content_type="text/markdown",
@@ -27,7 +27,6 @@ setup(
         'cluster',
     ],
     classifiers=[
-        'Development Status :: 5 - Production/Stable',
         'Environment :: Web Environment',
         'Environment :: Plugins',
         'Framework :: Lektor',