All notable changes to this project will be documented (for humans) in this file.
The format is based on Keep a Changelog and this project adheres to Semantic Versioning.
A "heartbeat" release after a long time without signs of life.
- Support for TACC/launcher use with SLURM and PBS
- "Support" for codespell with typo fixes and GitHub CI to keep it typo-free
- Tutorial documentation with hello-world and basic datalad-pair examples
- Black code formatter configuration and CI integration
- Documentation build test in CI
- Support for DNF package manager on RedHat-based systems
- Switched to GitHub Actions from Travis for CI
- Switched to use `datalad push` instead of deprecated `datalad publish`
- Removed support for Python before 3.8
- Documentation improvements and reorganization
- Updated dependencies: removed pycrypto, unpinned pyOpenSSL
- Switched to pytest-cov for test coverage
- Account for git's init.defaultBranch in tests
- Invalid escapes in Python strings
- job_templates: launcher subjob output is now stored in expected files
- Debian tracer version regexp handling
- Tests compatibility with newer Ubuntu images
- Singularity detection and test handling
- Condor and SLURM test configurations
Just a quick re-release for PyPI, ensuring (manually) that setuptools_scm is present
in the environment while running sdist, so that all auxiliary files such as setup_tools.py
are included in the distribution.
- "A typical workflow" section in README.md based on AWS EC2 HTCondor cluster
- EC2: easier handling of keys for the user, use NITRC-CE AMI by default
- EC2: ability to log in to an ec2-condor cluster
Minor feature and bug fix release
- use of etelemetry
- initial support for LSF submitter
- testing of venv tracking
- setting of `DATALAD_SSH_IDENTITYFILE` earlier (in `__init__`) for the DataLad orchestrator
Feature and bug fix release after a long release silence.
- `run`: support for SLURM submitter
- `full-except-datalad` setuptools installation "extra_requires" scheme
- `AwsCondor` resource type to assist in establishing a simple HPC cluster with condor as submitter in AWS
- S3: `put`, `get` of implemented resources got a recursive mode
- `run`: switch to use GNU parallel instead of the one from moreutils for local execution
- fixed up/improved documentation (in particular for `run`)
- use `docker>=3` instead of `docker-py`
- a wide variety of fixes
- python 3.5 support
Yarik needed to do a quick release to absorb changes to `run` functionality.
Major rename - a NICEMAN grows into a ReproMan. Too many changes to summarize
- `reproman run` to execute computation on a local or remote resource, with the possibility to submit computation to PBS and Condor.
Largely bugfixes and small enhancements. Major work is ongoing in PRs to provide new functionality (such as remote execution and environment comparisons).
- Tracing RPM-based (RedHat, CentOS) environments
- Tracing Singularity images
- A variety of fixes and enhancements in tracing details of git, conda, etc. resources.
- interactive ssh session fixes through use of the `fabric` module instead of custom code
- Refactored handling of resource parameters to avoid code duplication/boilerplate
Enhancements and fixes, primarily targeting better tracing of (collecting information about) the computational components.
- tracing of
  - Docker images
- `diff` command to provide a summary of differences between two specs
- conda environments could be regenerated from the environments
- relative paths could be provided to the `retrace` command
- tracing of Debian packages and Git repositories should be more robust to directories
- handling of older `conda` environments
Minor release with a few fixes and performance enhancements
- Create apt .sources files pointing to APT snapshot repositories
- Batch command invocations in the Debian tracer to significantly speed up retracing
- Output of the (re)traced spec into a file
A minor release to demonstrate retrace functionality
Just a template for future records:
TODO Summary