Overview
Comment:     Added basic ABC tokenizing via the PLY module
Timelines:   family | ancestors | descendants | both | ply
Files:       files | file ages | folders
SHA1:        729263ecd0a0667808ef21000d72bce8
User & Date: spiffytech@gmail.com on 2010-11-13 08:26:53
Other Links: branch diff | manifest | tags
Context
2010-11-13
08:28  Added basic ABC tokenizing via the PLY module (check-in: ff84293301, user: spiffytech@gmail.com, tags: ply)
08:26  Added basic ABC tokenizing via the PLY module (check-in: 729263ecd0, user: spiffytech@gmail.com, tags: ply)
00:12  Now handles chords (check-in: 5aa14570f1, user: spiffytech@gmail.com, tags: ply)
Changes
Modified cfg.py from [43fd8f5bb3] to [df0a6863be].
    #!/usr/bin/env python

    import os
    import random
    import sys
    import time

    random.seed(time.time())

    def main():
        key = "A"
        note_grammars = {
            "u": ["I V V V I I IV u u", "I IV u u", "I VII IV u u", "e"],
            "e": [""],
        }
        chord_grammars = {
            "u": ["I IV V IV I u u", "I VII IV u u", "I V IV u u", "e"],
            "e": [""],
        }

        compose_piece(key, note_grammars)
        compose_piece(key, chord_grammars, chords=True)

    def compose_piece(key, grammars, chords=False):
        score = ""
        while len(score.split()) < 200:
            score = "u u u"
            score = generate_score(score, grammars)

        score = transliterate_score(score, key, chords)
        score = generate_csound_score(score)

        print "f1 0 256 10 1 0 3 ; sine wave function table"
        for line in score:
            print line
︙
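The helpers called by `compose_piece` (`generate_score`, `transliterate_score`, `generate_csound_score`) fall outside this diff excerpt. As a rough illustration of how the retry loop above could reach 200 tokens, here is an assumed Python 3 sketch of the grammar-rewriting step: each nonterminal (`"u"`, `"e"`) is replaced with a randomly chosen production until only chord-degree terminals remain. The `max_len` and `max_passes` caps are additions not present in the original, included so the random expansion cannot run away.

```python
import random

def generate_score(score, grammars, max_len=400, max_passes=50):
    # Hypothetical implementation: repeatedly rewrite every nonterminal
    # with a random production from its grammar until none remain (or a
    # safety cap is hit), then drop any leftover nonterminals.
    symbols = score.split()
    for _ in range(max_passes):
        if not any(s in grammars for s in symbols) or len(symbols) >= max_len:
            break
        symbols = " ".join(
            random.choice(grammars[s]) if s in grammars else s
            for s in symbols
        ).split()
    return " ".join(s for s in symbols if s not in grammars)

note_grammars = {
    "u": ["I V V V I I IV u u", "I IV u u", "I VII IV u u", "e"],
    "e": [""],
}
print(generate_score("u u u", note_grammars))
```

Because `"u"` can expand into productions containing more `"u"`s, a single run may die out quickly (via `"e"` → `""`), which is why the caller loops until the expanded score reaches 200 tokens.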
Added parse.py version [7bbfef5961].
    #!/usr/bin/env python

    from ply import lex

    tokens = (
        "NOTE",
        "REST",
        "SHARP",
        "FLAT",
        "OCTAVE",
        "NATURAL",
        "LENGTH",
    )

    t_NOTE = r"[A-Ga-g]"
    t_REST = r"z"
    t_SHARP = r"\^"
    t_FLAT = r"_"
    t_NATURAL = r"="
    t_OCTAVE = r"'+|,+"

    def t_LENGTH(t):
        r"/?\d+"
        multiplier = float(t.value.strip("/"))
        if t.value.startswith("/"):
            multiplier = 1/multiplier
        t.value = multiplier
        return t

    def t_error(t):
        raise TypeError("Unknown text '%s'" % (t.value,))

    t_ignore = " |"

    lex.lex()
    lex.input("GFG B'AB,, | g/2fg gab | GFG BAB | d2A AFD")
    for tok in iter(lex.token, None):
        print repr(tok.type), repr(tok.value)
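The same token rules can be exercised without installing PLY. This Python 3 sketch (an assumption for illustration, not part of the check-in) combines the regexes from parse.py above into one master pattern with named groups, and applies the same duration conversion as `t_LENGTH`: a bare number multiplies the unit note length, while a leading `/` divides it.

```python
import re

# Token rules copied from parse.py; LENGTH listed first so durations are
# tried before other single-character rules.
TOKEN_SPEC = [
    ("LENGTH", r"/?\d+"),
    ("NOTE", r"[A-Ga-g]"),
    ("REST", r"z"),
    ("SHARP", r"\^"),
    ("FLAT", r"_"),
    ("NATURAL", r"="),
    ("OCTAVE", r"'+|,+"),
    ("IGNORE", r"[ |]"),  # mirrors t_ignore = " |"
]
MASTER = re.compile("|".join("(?P<%s>%s)" % pair for pair in TOKEN_SPEC))

def tokenize(text):
    for m in MASTER.finditer(text):
        kind, value = m.lastgroup, m.group()
        if kind == "IGNORE":
            continue
        if kind == "LENGTH":
            # Same conversion as t_LENGTH: "2" -> 2.0, "/2" -> 0.5
            mult = float(value.strip("/"))
            value = 1 / mult if value.startswith("/") else mult
        yield kind, value

print(list(tokenize("g/2fg")))
# -> [('NOTE', 'g'), ('LENGTH', 0.5), ('NOTE', 'f'), ('NOTE', 'g')]
```

Running this over a bar like `g/2fg` shows why `t_LENGTH` is a function rule in PLY: the token's value is rewritten from a string to a float multiplier as it is matched.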
Modified test.sco from [3dafd59328] to [da39a3ee5e].
(Entire file contents deleted; the new version is empty.)