- File an issue.
- Explain why you want the feature. How does it help you? What do you need it for?
- Fork Dredd.
- Create a feature branch.
- Write tests.
- Write code.
- Lint what you created:
npm run lint
- Send a Pull Request.
- Make sure test coverage didn’t drop and all CI builds are passing.
Semantic Release and Conventional Changelog
Releasing new Dredd versions to npm is automatically managed by Semantic Release. Semantic Release makes sure the correct version number gets bumped according to the meaning of your changes once your PR gets merged to master.
To make it work, it’s necessary to follow Conventional Changelog. That basically means all commit messages in the project should follow a particular format:
- feat - New functionality added
- fix - Broken functionality fixed
- perf - Performance improved
- docs - Documentation added/removed/improved/…
- chore - Package setup, CI setup, …
- refactor - Changes in code, but no changes in behavior
- test - Tests added/removed/improved/…
In the rare cases when your changes break backward compatibility, the commit message must include the string BREAKING CHANGE:. That will result in bumping the major version.
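For instance, commit messages following this convention could look like the following (the messages themselves are made up for illustration):

```text
feat: add colored output to the CLI reporter
fix: handle empty response bodies without crashing
perf: cache parsed API descriptions between runs

refactor: extract transaction runner into its own module

BREAKING CHANGE: The --old-flag option has been removed.
```

The last message would trigger a major version bump; the first three would trigger minor or patch bumps.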
Handbook for Contributors and Maintainers
Tests need to be pre-compiled every time because some integration tests use code linked from lib. This is certainly a flaw and it slows down day-to-day development, but until we streamline our build pipeline, the lib dependency is necessary.
Also mind that CoffeeScript is a production dependency (not a dev dependency), because it's needed for running user-provided hooks written in CoffeeScript.
Dredd depends on the drafter-npm package. That's the reason why you can see node-gyp errors and failures during the installation process, even though once it's done, Dredd works normally and correctly parses API Blueprint documents.
$ npm install -g dredd --no-optional
Troubleshooting the compilation
If you need the performance of the C++11 parser, but you are struggling to get it installed, it’s usually because of the following problems:
Supported Node.js Versions
Given the table with the LTS schedule, only versions marked as Maintenance or Active are supported, until their Maintenance End. The testing matrix of Dredd's CI builds must contain all currently supported versions and must not contain any unsupported versions. The same applies to the underlying libraries, such as Dredd Transactions or Gavel.js.
The latest supported Node.js version should be used in the following files:
- appveyor.yml - Windows CI builds
- docs/install-node.sh - ReadTheDocs docs builds
Dependencies should not be specified in a loose way; only exact versions are allowed. Any changes to dependencies (version upgrades included) must be approved by Oracle before being merged to master. Dredd maintainers take care of the approval. For transparency, PRs with a pending dependency approval are labeled accordingly.
The internal Oracle policies about dependencies pay attention mainly to licenses. Before adding a new dependency or upgrading an existing one, try to make sure the project and all its transitive dependencies use standard permissive licenses, including correct copyright holders and license texts.
Dredd follows Semantic Versioning. To ensure a certain stability of Dredd installations (e.g. in CI builds), users can pin their version. They can also use release tags:
- npm install dredd - Installs the latest published version, including experimental pre-release versions.
- npm install dredd@stable - Skips experimental pre-release versions.
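For example, a CI build could pin an exact version in package.json (the version number below is purely illustrative):

```json
{
  "devDependencies": {
    "dredd": "5.1.0"
  }
}
```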
When releasing, make sure you respect the tagging:
- To release a pre-release, e.g. 42.1.0-pre.7, use just
- To release any other version, use
npm publish && npm dist-tag add email@example.com stable.
The releasing process for standard versions is currently automated by Semantic Release. The releasing process for pre-releases is not automated and needs to be done manually, ideally from a special git branch.
Use npm test to run all tests. Dredd uses Mocha as a test framework. Its default options are in the
Dredd is tested on AppVeyor, a Windows-based CI. There are still several known limitations when using Dredd on Windows, but the intention is to support it without any compromises. Any help with fixing problems on Windows is greatly appreciated!
The linter is optional for local development, to make prototyping and work with unpolished code easy, but it is enforced at the CI level. It is recommended that you integrate ESLint with your favorite editor so you see violations immediately while coding.
The source of the documentation can be found in the docs directory. To render Dredd's documentation on your computer, you need Python 3 and Node.js installed.
Installation and Development
- Make sure node is an executable and npm install has been run in the Dredd directory. Extensions to the docs are written in Node.js and Sphinx needs a way to execute them.
- Get Python 3. On macOS, run brew install python3. ReadTheDocs builds the docs with Python 3.5, so make sure you have that version or higher.
- Create a virtual environment and activate it:
python3 -m venv ./venv
. ./venv/bin/activate
- Install dependencies for the docs:
pip install -r docs/requirements.txt
Once installed, you may use the following commands:
- npm run docs:build - Builds the documentation
- npm run docs:serve - Runs a live preview of the documentation on
Installation on ReadTheDocs
The final documentation gets deployed to ReadTheDocs. The service, however, does not support Node.js. Therefore, on ReadTheDocs, the conf.py configuration file for Sphinx runs docs/install-node.sh, which installs Node.js locally using nvm.
ToC and Markdown
Traditionally, Sphinx only supported the reStructuredText format. Thanks to the recommonmark project, it's possible to also use Markdown, almost as a format native to Sphinx. Dredd's docs use the AutoStructify extension to be able to specify toctree and other features specific to reStructuredText. The ToC is generated from the Contents section in the
There are some extensions hooked into Sphinx's build process, modifying how the documents are processed. They're written in Node.js, because:
- It's better to have them in the same language as Dredd.
- This way they're able to import source files (e.g.
By default, Hercule is attached as an extension, which means you can use the :[Title](link.md) syntax for including other Markdown files. All other extensions are custom and are automatically loaded from the
The extension is expected to be a .coffee script file, which takes the docname as an argument, reads the Markdown document from stdin, modifies it, and then prints it to stdout. When in need of templating, extensions are expected to use the bundled ect templating engine.
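That contract can be sketched as follows. This is a hypothetical extension written in plain JavaScript for illustration only (the real extensions are .coffee scripts, and the transformation shown here is made up):

```javascript
// Hypothetical docs extension sketch: takes the docname as an argument,
// reads Markdown from stdin, transforms it, and writes it to stdout.
const docname = process.argv[2];

// Made-up example transformation: prefix the first top-level
// heading with the docname.
function transform(markdown, name) {
  return markdown.replace(/^# (.*)$/m, `# ${name}: $1`);
}

// Only wire up stdin/stdout when a docname was actually given.
if (docname) {
  let input = '';
  process.stdin.on('data', (chunk) => { input += chunk; });
  process.stdin.on('end', () => {
    process.stdout.write(transform(input, docname));
  });
}
```

Such a script would then be invoked by the build pipeline roughly as `cat page.md | coffee extension.coffee page | …`, with the transformed Markdown flowing on to the next stage.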
Currently, the recommonmark project still has some limitations in how references to local files work. That's why Dredd's docs have a custom implementation, which also checks whether the destination exists and fails the build in case of a broken link. You can use the following syntax:
- [Title](link.md) to link to other documents
- [Title](link.md#section) to link to sections of other documents
The id HTML attributes generated for headings, as well as manual <a name="section"></a> anchors, are considered valid targets. While this feels very natural for a seasoned writer of Markdown, mind that it is much more error-prone than reStructuredText's references.
Redirects are documented in the docs/redirects.yml file. They need to be manually set in the ReadTheDocs administration. It's up to Dredd maintainers to keep the list in sync with reality.
You can use the rtd-redirects tool to programmatically upload the redirects from docs/redirects.yml to the ReadTheDocs admin interface.
Dredd strives for as much test coverage as possible. Coveralls helps us monitor how successful we are in achieving that goal. If a Pull Request introduces a drop in coverage, it won't be accepted unless the author or reviewer provides a good reason why an exception should be made.
The Travis CI build uses the following commands to deliver coverage reports:
- npm run test:coverage - Tests Dredd and creates the ./coverage/lcov.info file
- npm run coveralls - Uploads the ./coverage/lcov.info file to Coveralls
The first of those commands works like this:
- We run the tests on the instrumented code using Mocha with a special lcov reporter, which gives us information about which lines were executed in a standard lcov format.
- Because some integration tests execute the bin/dredd script in a subprocess, we collect the coverage stats in that file as well. The results are appended to a dedicated lcov file.
- All lcov files are then merged into one using lcov-result-merger and sent to Coveralls.
- A hand-made combined Mocha reporter is used to run the tests and collect coverage at the same time.
- Both the Dredd code and the combined reporter decide whether to collect coverage according to the contents of the COVERAGE_DIR environment variable, which sets the directory for temporary LCOV files created during coverage collection. (If it is set, collecting takes place.)
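The gating convention from the last point can be sketched in shell (the directory name below is illustrative, not the one Dredd uses):

```shell
# Collection happens only when COVERAGE_DIR is non-empty.
COVERAGE_DIR=./coverage-tmp

if [ -n "$COVERAGE_DIR" ]; then
  # Temporary LCOV files would be written here during the test run.
  mkdir -p "$COVERAGE_DIR"
fi
```

Leaving the variable unset runs the same suite without any coverage overhead.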
Hacking Apiary Reporter
If you want to build something on top of the Apiary Reporter, note that it uses a public API described in the following documents:
The following data are sent over the wire to Apiary:
There is also one environment variable you could find useful:
- APIARY_API_URL='https://api.apiary.io' - Allows overriding the host of the Apiary Tests API.
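For example, to point the Apiary reporter at a different host (the URL and file names below are illustrative):

```shell
# Override the host of the Apiary Tests API for this run.
export APIARY_API_URL='https://api.example.com'

# ...then run Dredd with the Apiary reporter as usual, e.g.:
#   dredd apiary.apib http://127.0.0.1:3000 --reporter=apiary
```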
- When using long CLI options in tests or documentation, please always use the notation with = wherever possible. For example, use --path=/dev/null instead of --path /dev/null. While both should work, the version with = feels more like standard GNU-style long options and it makes arrays of arguments for
- 127.0.0.1 (in code, tests, documentation) is preferred over
- Prefer explicit <br> tags instead of two spaces at the end of the line when writing documentation in Markdown.