*wltp* gear-shifts calculator
- Version: x.x.x
- Home:
- Documentation:
- PyPI:
- Copyright: 2013-2014 European Commission (JRC-IET)
- License:
Calculates the gear-shifts of Light-duty vehicles running the WLTP driving-cycles, according to UNECE’s GTR (Global Technical Regulation) draft.
Introduction
The calculator accepts as input the vehicle's technical data, along with parameters for modifying the execution of the WLTC cycle, and outputs the gear-shifts of the vehicle, the attained speed-profile, and any warnings. It does not calculate any CO2 emissions.
An “execution” or a “run” of an experiment is the processing and augmentation of the input-model to produce the output-model. The process is depicted in the following diagram:
    .---------------------.                   .----------------------------.
    ;     input-model     ;                   ;        output-model        ;
    ;---------------------;                   ;----------------------------;
    ; +--vehicle          ;    ___________    ; +---...                    ;
    ; +--params           ;   |           |   ; +--cycle_run:              ;
    ; +--wltc_data        ;==>| Processor |==>;     t  v_class  gear       ;
    ;                     ;   |___________|   ;    ------------------      ;
    ;                     ;                   ;    00     0.0      1       ;
    ;                     ;                   ;    01     1.3      1       ;
    ;                     ;                   ;    02     5.5      1       ;
    ;                     ;                   ;    ...                     ;
    '---------------------'                   '----------------------------'
Install
Requires Python-2.7+ or Python-3.3+ (preferred).
You can install (or upgrade) the project directly from the PyPI repository with `pip`. Notice that the `--pre` option is required, since all released packages so far were pre-release (-alpha) versions:
$ pip install wltp --pre -U ## Use `pip3` if both python-2 & 3 installed.
$ wltp.py --version ## Check which version installed.
wltp.py 0.0.9-alpha
Alternatively, you can build the latest version of the project from the sources (assuming you have a working installation of git) and install it in development mode with the following series of commands:
$ git clone "https://github.com/ankostis/wltp.git" wltp.git
$ cd wltp.git
$ python setup.py develop ## Use `python3` if you have installed both python-2 & 3.
That way you get the complete source-tree of the project, ready for development (see the Getting Involved section, below):
    +--wltp/            ## (package) The python-code of the calculator
    |   +--cycles/      ## (package) The python-code for the WLTC data
    |   +--test/        ## (package) Test-cases and the wltp_db
    |   +--model        ## (module) Describes the data for the calculation
    |   +--experiment   ## (module) The calculator
    +--docs/            ## Documentation folder
    +--devtools/        ## Scripts for preprocessing WLTC data and the wltp_db
    +--wltp.py          ## (script) The cmd-line entry-point script for the calculator
    +--setup.py         ## (script) The entry point for `setuptools`, installing, testing, etc
    +--requirements.txt ## The installation dependencies.
    +--README.rst
    +--CHANGES.rst
    +--LICENSE.txt
The previous command also installed any dependencies inside the project-folder. If you wish to install them on your system (or virtualenv), enter:
pip install -r requirements.txt
Python usage
Here is a quick-start Python REPL (Read-Eval-Print Loop) example to set up and run an experiment. First run `python` and try to import the project to check its version:
    >>> import wltp

    >>> wltp.__version__            ## Check version once more.
    '0.0.9-alpha.1'

    >>> wltp.__file__               ## To check where it was installed.    # doctest: +SKIP
    /usr/local/lib/site-package/wltp-...
If everything works, create the pandas-model that will hold the input-data (strings and numbers) of the experiment. You can assemble the model-tree using:
- sequences,
- dictionaries,
- `pandas.DataFrame`,
- `pandas.Series`, and
- URI-references to other model-trees.
For instance:
    >>> from wltp import model
    >>> from wltp.experiment import Experiment
    >>> from collections import OrderedDict as odic     ## It is handy to preserve keys-order.

    >>> mdl = odic(
    ...     vehicle = odic(
    ...         unladen_mass = 1430,
    ...         test_mass    = 1500,
    ...         v_max        = 195,
    ...         p_rated      = 100,
    ...         n_rated      = 5450,
    ...         n_idle       = 950,
    ...         n_min        = None,                    ## Manufacturers may override it
    ...         gear_ratios       = [120.5, 75, 50, 43, 37, 32],
    ...         resistance_coeffs = [100, 0.5, 0.04],
    ...     )
    ... )
For information on the accepted model-data, check its JSON-schema:
    >>> model.json_dumps(model.model_schema(), indent=2)        # doctest: +SKIP
    {
      "properties": {
        "params": {
          "properties": {
            "f_n_min_gear2": {
              "description": "Gear-2 is invalid when N :< f_n_min_gear2 * n_idle.",
              "type": [
                "number",
                "null"
              ],
              "default": 0.9
            },
            "v_stopped_threshold": {
              "description": "Velocity (Km/h) under which (<=) to idle gear-shift (Annex 2-3.3, p71).",
              "type": [
    ...
You then have to feed this model-tree to the `wltp.experiment.Experiment` constructor. Internally the `wltp.pandel.Pandel` class resolves URIs, fills-in default values and validates the data based on the project's pre-defined JSON-schema:
>>> processor = Experiment(mdl) ## Fills-in defaults and Validates model.
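To make the default-filling step concrete, here is a minimal, stdlib-only sketch of the idea (a hypothetical `fill_defaults` helper, not the project's actual `wltp.pandel.Pandel` implementation):

```python
def fill_defaults(data, schema):
    """Recursively copy `default` values from a JSON-schema into `data`."""
    for key, subschema in schema.get("properties", {}).items():
        if key not in data and "default" in subschema:
            data[key] = subschema["default"]
        elif isinstance(data.get(key), dict):
            fill_defaults(data[key], subschema)
    return data

# A fragment shaped like the schema-dump above:
schema = {
    "properties": {
        "params": {
            "properties": {
                "f_n_min_gear2": {"type": ["number", "null"], "default": 0.9},
            },
        },
    },
}
mdl = fill_defaults({"params": {}}, schema)
print(mdl["params"]["f_n_min_gear2"])   # -> 0.9
```

The real class additionally resolves URI-references and validates the merged tree, which this sketch omits.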
Assuming validation passes without errors, you can now inspect the defaulted-model before running the experiment:
    >>> mdl = processor.model                   ## Returns the validated model with filled-in defaults.
    >>> sorted(mdl)                             ## The "defaulted" model now includes the `params` branch.
    ['params', 'vehicle']
    >>> 'full_load_curve' in mdl['vehicle']     ## A default wot was also provided in the `vehicle`.
    True
Now you can run the experiment:
    >>> mdl = processor.run()                   ## Runs experiment and augments the model with results.
    >>> sorted(mdl)                             ## Print the top-branches of the "augmented" model.
    ['cycle_run', 'params', 'vehicle']
To access the time-based cycle-results it is better to use a `pandas.DataFrame`:
    >>> import pandas as pd
    >>> df = pd.DataFrame(mdl['cycle_run']); df.index.name = 't'

    >>> df.shape                                ## ROWS(time-steps) X COLUMNS.
    (1801, 11)

    >>> df.columns
    Index(['v_class', 'v_target', 'clutch', 'gears_orig', 'gears', 'v_real',
           'p_available', 'p_required', 'rpm', 'rpm_norm', 'driveability'],
          dtype='object')

    >>> 'Mean engine_speed: %s' % df.rpm.mean()
    'Mean engine_speed: 1917.0407829'

    >>> df.describe()
               v_class     v_target     clutch   gears_orig        gears  \
    count  1801.000000  1801.000000       1801  1801.000000  1801.000000
    mean     46.506718    46.506718  0.0660744     3.794003     3.683509
    std      36.119280    36.119280  0.2484811     2.278959     2.278108
    ...
    <BLANKLINE>
                v_real  p_available   p_required          rpm     rpm_norm
    count  1801.000000  1801.000000  1801.000000  1801.000000  1801.000000
    mean     50.356222    28.846639     4.991915  1917.040783     0.214898
    std      32.336908    15.833262    12.139823   878.139758     0.195142
    ...

    >>> processor.driveability_report()         # doctest: +SKIP
    ...
      12: (a: X-->0)
      13: g1: Revolutions too low!
      14: g1: Revolutions too low!
    ...
      30: (b2(2): 5-->4)
    ...
      38: (c1: 4-->3)
      39: (c1: 4-->3)
      40: Rule e or g missed downshift(40: 4-->3) in acceleration?
    ...
      42: Rule e or g missed downshift(42: 3-->2) in acceleration?
    ...
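Any plain pandas operation works on the cycle-run columns as well; for instance, a sketch of a gear-usage histogram (shown here on stand-in data rather than a real `cycle_run`, so it runs on its own):

```python
import pandas as pd

# Stand-in for the real `gears` column of a cycle_run:
gears = pd.Series([0, 1, 1, 2, 2, 2, 3], name="gears")

# Fraction of the cycle's time-steps spent in each gear:
shares = gears.value_counts(normalize=True).sort_index()
print(shares)
```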
You can export the cycle-run results in a CSV-file with the following pandas command:
>>> df.to_csv('cycle_run.csv') # doctest: +SKIP
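If you only need some of the columns, plain pandas indexing works before the export; a small sketch with stand-in data (the real `df` would come from `mdl['cycle_run']`, and the column subset and filename here are arbitrary):

```python
import pandas as pd

# Stand-in cycle-run data; the real frame holds 1801 rows and 11 columns.
df = pd.DataFrame({
    "v_class": [0.0, 1.3, 5.5],
    "v_real":  [0.0, 1.2, 5.4],
    "gears":   [1, 1, 1],
})
df.index.name = "t"

# Keep only the velocity columns, preserving the time index:
df[["v_class", "v_real"]].to_csv("cycle_velocities.csv", index=True)
```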
For more examples, download the sources and check the test-cases found under the `/wltp/test/` folder.
Cmd-line usage
The examples presented so far required executing multiple commands interactively inside the Python interpreter (REPL). The command-line usage below still requires the Python environment to be installed, but allows executing an experiment directly from the OS's shell (i.e. `cmd` in Windows or `bash` in POSIX), in a single command.
The entry-point script is called `wltp.py`, and it must have been placed in your `PATH` during installation. This script can construct a model by reading input-data from multiple files and/or overriding specific single-value items. Conversely, it can output multiple parts of the resulting-model into files.
To get help for this script, use the following commands:
$ wltp.py --help ## to get generic help for cmd-line syntax
$ wltp.py -M /vehicle ## to get help for specific model-paths
and then, assuming `vehicle.csv` is a CSV file with the vehicle parameters for which you want to override only the `n_idle`, run the following:
$ wltp.py -v \
-I vehicle.csv file_frmt=SERIES model_path=/params header@=None \
-m /vehicle/n_idle:=850 \
-O cycle.csv model_path=/cycle_run
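The `/vehicle/n_idle` and `/cycle_run` arguments above are JSON-pointer-like paths into the model-tree. A minimal sketch of how such a path resolves against nested dictionaries and lists (illustrative only; it skips RFC 6901 details such as the `~0`/`~1` escapes):

```python
def resolve_pointer(doc, pointer):
    """Walk a '/'-separated pointer down nested dicts/lists."""
    obj = doc
    for token in pointer.lstrip("/").split("/") if pointer else []:
        obj = obj[int(token)] if isinstance(obj, list) else obj[token]
    return obj

mdl = {"vehicle": {"n_idle": 950, "gear_ratios": [120.5, 75, 50]}}
print(resolve_pointer(mdl, "/vehicle/n_idle"))         # -> 950
print(resolve_pointer(mdl, "/vehicle/gear_ratios/0"))  # -> 120.5
```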
IPython notebook usage
The list of IPython notebooks for wltp is maintained at the wiki of the project.
Requirements
In order to run them interactively, ensure that the following requirements are satisfied:
- An IPython-notebook server >= v2.x.x is installed, up and running.
- wltp is installed on your system's Python-3 (see Install, above).
Instructions
Visit each notebook from the wiki-list that you wish to run and download it as an `.ipynb` file from the menu (File|Download as...|IPython Notebook(.ipynb)).
Locate the downloaded file with your file-browser and drag n’ drop it on the landing page of your notebook’s server (the one with the folder-list).
Enjoy!
Getting Involved
This project is hosted on GitHub. To provide feedback about bugs and errors or questions and requests for enhancements, use GitHub's Issue-tracker.
Sources & Dependencies
To get involved with development, first you need to download the latest sources:
$ git clone https://github.com/ankostis/wltp.git wltp.git
$ cd wltp.git
Then you can install all of the project's dependencies in development mode using the `setup.py` script:
$ python setup.py --help ## Get help for this script.
Common commands: (see '--help-commands' for more)
setup.py build will build the package underneath 'build/'
setup.py install will install the package
Global options:
--verbose (-v) run verbosely (default)
--quiet (-q) run quietly (turns verbosity off)
--dry-run (-n) don't actually do anything
...
$ python setup.py develop ## Also installs dependencies into project's folder.
$ python setup.py build ## Check that the project indeed builds ok.
You should now run the test-cases (see Tests & Metrics, below) to check that the sources are in good shape:
$ python setup.py test
Note
The above commands installed the dependencies inside the project folder and only for the virtual-environment. That is why all build and testing actions have to go through `python setup.py {some_cmd}`.
If you are dealing with installation problems and/or you want to permanently install dependent packages, you have to deactivate the virtual-environment and start installing them into your base Python environment:
$ deactivate
$ python setup.py develop
or even try the more permanent installation-mode:
$ python setup.py install # May require admin-rights
Development procedure
For submitting code, use UTF-8 everywhere, Unix line-endings (LF) and set `git config core.autocrlf input`.
The typical development procedure is like this:
Modify the sources in small, isolated and well-defined changes, i.e. adding a single feature, or fixing a specific bug.
Add test-cases “proving” your code.
Rerun all test-cases to ensure that you didn't break anything, and check that their coverage remains above 80%:
$ python setup.py nosetests --with-coverage --cover-package wltp.model,wltp.experiment --cover-min-percentage=80
Tip
You can enter just python setup.py test_all instead of the above cmd-line, since it has been aliased in the `setup.cfg` file. Check this file for more example commands to use during development.
If you made a rather important modification, update also the CHANGES file and/or other documents (i.e. README.rst). To see the rendered results of the documents, issue the following commands and read the resulting html at `build/sphinx/html/index.html`:
$ python setup.py build_sphinx             # Builds html docs
$ python setup.py build_sphinx -b doctest  # Checks if python-code embedded in comments runs ok.
If there are no problems, commit your changes with a descriptive message.
Repeat this cycle for other bugs/enhancements.
When you are finished, push the changes upstream to github and make a merge_request. You can check whether your merge-request indeed passed the tests by checking its build-status on the integration-server’s site (TravisCI).
Hint
Skim through the small IPython developer's documentation on the matter: The perfect pull request
Tests & Metrics
In order to keep the algorithm stable, a lot of effort has been put into setting up a series of test-cases and metrics to check the sanity of the results and to compare them with the Heinz-db tool or other datasets. These tests can be found in the `wltp/test/` folders. Code for generating diagrams for the metrics below is located in the `docs/pyplot/` folder.
Specs & Algorithm
This program was implemented from scratch based on the GTR specification (23.10.2013 ECE-TRANS-WP29-GRPE-2013-13 0930.docx, included in the `docs/` folder). The latest version of this GTR, along with other related documents, can be found at UNECE's site:
http://www.unece.org/trans/main/wp29/wp29wgs/wp29grpe/grpedoc_2013.html
https://www2.unece.org/wiki/pages/viewpage.action?pageId=2523179
Probably a more comprehensible but older spec is this one: https://www2.unece.org/wiki/display/trans/DHC+draft+technical+report
The WLTC-profiles for the various classes in the `devtools/data/cycles/` folder were generated from the tables of the specs above using the `devtools/csvcolumns8to2.py` script, but it still requires an intermediate manual step involving a spreadsheet to copy the tables into and save them as CSV.
Then use the `devtools/buildwltcclass.py` script to construct the respective Python variables in the `wltp/model.py` sources.
Data-files generated from Steven Heinz's MS-Access vehicle info db-table can be processed with the `devtools/preprocheinz.py` script.
Development team
- Author:
Kostis Anagnostopoulos
- Contributing Authors:
Heinz Steven (test-data, validation and review)
Georgios Fontaras (simulation, physics & engineering support)
Alessandro Marotta (policy support)
Glossary
WLTP
    The `Worldwide harmonised Light duty vehicles Test Procedure <https://www2.unece.org/wiki/pages/viewpage.action?pageId=2523179>`_, a **GRPE** informal working group.

UNECE
    The United Nations Economic Commission for Europe, which has assumed the steering role on the **WLTP**.

GRPE
    **UNECE** Working party on Pollution and Energy - Transport Programme.

GS Task-Force
    The Gear-shift Task-force of the **GRPE**. It is the team of automotive experts drafting the gear-shifting strategy for vehicles running the **WLTP** cycles.

WLTC
    The family of pre-defined *driving-cycles* corresponding to vehicles with different PMR (Power to Mass Ratio). Classes 1, 2, 3a & 3b are split in 2, 4, 4 and 4 *parts* respectively.

Unladen mass
    *UM* or *Curb weight*, the weight of the vehicle in running order minus the mass of the driver.

Test mass
    *TM*, the representative weight of the vehicle used as input for the calculations of the simulation, derived by interpolating between high and low values for the CO2-family of the vehicle.

Downscaling
    Reduction of the top-velocity of the original drive trace to be followed, to ensure that the vehicle is not driven in an unduly high proportion of "full throttle".

pandas-model
    The *container* of data that the gear-shift calculator consumes and produces. It is implemented by ``wltp.pandel.Pandel`` as a mergeable stack of **JSON-schema** abiding trees of strings and numbers, formed with sequences, dictionaries, ``pandas``-instances and URI-references.

JSON-schema
    The `JSON schema <http://json-schema.org/>`_ is an `IETF draft <http://tools.ietf.org/html/draft-zyp-json-schema-03>`_ that provides a *contract* for what JSON-data is required for a given application and how to interact with it. JSON Schema is intended to define validation, documentation, hyperlink navigation, and interaction control of JSON data. You can learn more about it from this `excellent guide <http://spacetelescope.github.io/understanding-json-schema/>`_, and experiment with this `on-line validator <http://www.jsonschema.net/>`_.

JSON-pointer
    JSON Pointer (RFC 6901) defines a string syntax for identifying a specific value within a JavaScript Object Notation (JSON) document. It aims to serve the same purpose as *XPath* from the XML world, but it is much simpler.