mir_eval is a Python library which provides a transparent, standardized, and straightforward way to evaluate Music Information Retrieval systems.

If you use mir_eval in a research project, please cite the following paper:

C. Raffel, B. McFee, E. J. Humphrey, J. Salamon, O. Nieto, D. Liang, and D. P. W. Ellis, "mir_eval: A Transparent Implementation of Common MIR Metrics", Proceedings of the 15th International Conference on Music Information Retrieval, 2014.

The simplest way to install mir_eval is by using pip, which will also install the required dependencies if needed. To install mir_eval using pip, simply run

    pip install mir_eval

Alternatively, you can install mir_eval from source by first installing the dependencies and then running

    python setup.py install

If you don't use Python and want to get started as quickly as possible, you might consider using Anaconda, which makes it easy to install a Python environment which can run mir_eval.

Once you've installed mir_eval (see Installing mir_eval), you can import it in your Python code as follows:

    import mir_eval

From here, you will typically either load in data and call the evaluate() function from the appropriate submodule, or call individual metric functions, like so:

    # Load in reference and estimated beat annotations
    reference_beats = mir_eval.io.load_events('reference_beats.txt')
    estimated_beats = mir_eval.io.load_events('estimated_beats.txt')
    # Crop out beats before 5s, a common preprocessing step
    reference_beats = mir_eval.beat.trim_beats(reference_beats)
    estimated_beats = mir_eval.beat.trim_beats(estimated_beats)
    # Compute the F-measure metric and store it in f_measure
    f_measure = mir_eval.beat.f_measure(reference_beats, estimated_beats)

Alternatively, you can use the evaluator scripts, which allow you to run evaluation from the command line without writing any code.

The documentation for each metric function, found in the mir_eval section below, contains further usage information.
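To make the F-measure metric above concrete, here is a minimal, self-contained sketch of the kind of computation a beat F-measure involves: greedily match each estimated beat to an unused reference beat within a small tolerance window (0.07 s is a common choice), then combine precision and recall. This is an illustrative toy implementation, not mir_eval's actual code; the function name `beat_f_measure` and the greedy matching strategy are assumptions for the example.

```python
import numpy as np

def beat_f_measure(reference_beats, estimated_beats, window=0.07):
    """Toy beat F-measure: greedily match each estimated beat to an
    unused reference beat within +/- `window` seconds, then return
    the harmonic mean of precision and recall.

    NOTE: an illustrative sketch, not mir_eval's implementation.
    """
    reference_beats = np.asarray(reference_beats, dtype=float)
    estimated_beats = np.asarray(estimated_beats, dtype=float)
    if len(reference_beats) == 0 or len(estimated_beats) == 0:
        return 0.0
    matched = np.zeros(len(reference_beats), dtype=bool)
    hits = 0
    for est in estimated_beats:
        # Indices of unused reference beats within the tolerance window
        candidates = np.flatnonzero(
            ~matched & (np.abs(reference_beats - est) <= window)
        )
        if len(candidates) > 0:
            # Take the closest candidate and mark it as used
            best = candidates[np.argmin(np.abs(reference_beats[candidates] - est))]
            matched[best] = True
            hits += 1
    precision = hits / len(estimated_beats)
    recall = hits / len(reference_beats)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, with reference beats at [1, 2, 3, 4] and estimates at [1.02, 2.0, 3.5], two of three estimates land within the window, giving precision 2/3, recall 1/2, and an F-measure of 4/7. In practice you would call `mir_eval.beat.f_measure` as shown above rather than rolling your own.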