Addition to last month

After last month's article came out, I got an email from Ian, who suggested his preferred tool for Markdown to HTML conversion, Remarkable (see the Further Reading section for a link). So, for any readers who are looking for something more like that, you now have a starting place!
Also, I created/updated my bash script for pandoc (md2pdf), which looks like what is shown above.

If you want to use the script yourself, make sure that the path to the tufte-css file is correct for your system (see the …).

md2pdf Notes-To-Convert.md

The script automatically creates the PDF filename using the original markdown filename, and adds a title to the PDF to avoid the warning/error about a missing title.
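A minimal Python sketch of the same idea - derive the PDF name from the markdown name and pass a title along to pandoc - assuming pandoc and a PDF engine are installed. This is only an illustration, not the md2pdf bash script itself:

import subprocess
import sys
from pathlib import Path

md_file = Path(sys.argv[1])             # e.g. Notes-To-Convert.md
pdf_file = md_file.with_suffix(".pdf")  # PDF name derived from the markdown name

# give the document a title so pandoc does not complain about a missing one
subprocess.run(["pandoc", str(md_file), "-o", str(pdf_file),
                "--metadata", "title=" + md_file.stem], check=True)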
And now back to your regularly scheduled programming…
This month, one of the items on my to-do list was to organize my SGF (go game records) files into a format where I can, at a glance, see whether I won or lost, and when I played it. Originally, I had hoped that each file would have the date stored in the SGF information, …

Do note, I am condensing the entire process for the sake of this article. My goal is to instill the TDD mindset in my readers, while offering some examples. The full code will be linked at the end of the article, for anyone who wants to pick it apart.
First Step

The first step was to decide which format to start with - I settled on the Fox Go Server format, as the information was on one line, and should therefore require the least amount of processing to get the information into Python.
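For anyone unfamiliar with the format: an SGF file is essentially a run of key[value] properties. A single-line header of the kind described here looks roughly like the following - the player names, ranks, date and result are made-up placeholders, and the exact properties vary from server to server:

(;GM[1]FF[4]SZ[19]PB[BlackPlayer]BR[5k]PW[WhitePlayer]WR[4k]DT[2019-03-01]RE[W+3.5])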
Second Step

Once I had decided what to tackle first, I then set up my folder structure like this:

sgf.py
__init__.py
_tests.py
main.py

The main.py file I originally added after finishing the SGF class and the tests, but it won't hurt anything to have the file ready from the beginning. Also, __init__.py is empty, but seems to be required for relative imports to work.
Third Step - Tests

Now for the first file - tests. Following the practices of TDD (and Adam Wathan's method), I started with my tests instead of any actual code.

The _tests.py file started out looking like this:

import unittest

from sgf import SGF

class SGFItemTests(unittest.TestCase):
    sgfPath = "…"

    def test_load_singleLine_sgf(self):
        testItem = SGF(self.sgfPath)
        self.assertEqual(testItem.getTitle(), …)

if __name__ == '__main__':
    unittest.main()

I left it at that, knowing the test would fail. I was also getting warnings and errors from Visual Studio Code about the class not existing before running anything. As such, I skipped running the test and instead worked using the warnings from Code. If, however, this is your first TDD project, I recommend getting in the habit of running the tests at every stage and dealing with the errors as they appear.
Fourth Step - Actual Development

sgf.py

import re  # this is needed later on for the regex code

class SGF:
    def __init__(self, sgfPath):
        self.title = "created"

All I did here was make sure I could import the python file and that it had a constructor. I then began running the tests, and fixing each error as it occurred. First it required me to create a getTitle() function, then I expanded the constructor to loop through the file and pass each line through to a createTitle function that checked for the existence of specific data (such as PB[], PW[], date[], WR[], BR[], and RE[]). Those fields are player (black), player (white), the date, the players' ranks, and the result.
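A minimal sketch of the shape the class could take at this point, assuming only what the paragraph above describes - getTitle and createTitle are the names used in the article, but the patterns, the field set, and the title format are placeholders, not the final code from the Gist:

import re

class SGF:
    def __init__(self, sgfPath):
        self.title = "created"
        with open(sgfPath) as sgfFile:
            for line in sgfFile:
                self.createTitle(line)

    def createTitle(self, line):
        # each pattern captures whatever sits between the square brackets
        fields = {}
        for key in ('PB', 'PW', 'BR', 'WR', 'DT', 'RE'):
            match = re.search(key + r'\[(.*?)\]', line)
            if match:
                fields[key] = match.group(1)
        if {'PB', 'PW', 'DT', 'RE'} <= fields.keys():
            self.title = "{DT} - {PB} vs {PW} - {RE}".format(**fields)

    def getTitle(self):
        return self.title

With something along those lines in place, getTitle() returns a string built from the extracted fields, which is what the single-line test checks.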
Admittedly, I stretched those steps out slowly - first I tried to grab the player names and had my test written for that, and so on, evolving both the class and my tests. For the sake of this article, I'm condensing some steps.

The regex I used was as follows:

name = re.search('…')

if name:
    …
The important part of this code is the pair of normal brackets, "()", which creates a group of all the characters between the square brackets (which are the values I'm after). The name.group(1) line simply loads the saved group into a string.

I changed the value I was looking for, but the basic framework remained the same. As you can see, I started saving dictionaries for the various values to make the code more readable. Essentially, the entire class became a series of functions to strip out corresponding information (player information, …).
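A small, self-contained illustration of that capture-group idea; the pattern and the sample line are assumptions rather than the exact code used:

import re

# a made-up single-line header, like the sample shown earlier
line = "(;GM[1]FF[4]SZ[19]PB[BlackPlayer]BR[5k]PW[WhitePlayer]WR[4k]DT[2019-03-01]RE[W+3.5])"

name = re.search(r'PB\[(.*?)\]', line)  # the () capture everything between PB[ and ]
if name:
    print(name.group(1))  # prints: BlackPlayer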
Fifth Step - Next Test

The entire above step was dedicated to having my test "test_load_singleLine_sgf" pass successfully. The reason I did it this way was as a proof of concept, and to refine the various functions for parsing the data. This means that all I had left to do was upgrade my file parsing function to not fail when all the metadata isn't on one line. It doesn't matter if there are extra items, as the regex will pick out only what I'm looking for. I then created a new test called "test_load_multiLine_sgf", …

The first goal was to again load the player data properly (both black and white), which required me to devise a check for whether or not the metadata was over multiple lines. I opened up an online regex tester, put in some test data, and experimented a bit until I found a regex that seemed to work.
The entire checkMultiline function ended up looking like this:

def checkMultiline(self, line):
    multiline = re.search('…', line)
    if multiline:
        …
    else:
        …

What the regex does is to search for any characters (upper or lowercase) that precede a square bracket, some characters, a closing square bracket, and a newline. I wasn't too worried about only matching exactly the metadata lines, as I never read the entire file (I break out of the loop once I find all the information I need), and the secondary regex will not be affected. The check is used in my readSGF function, and every line that matches the multiline check is then strung together into a single string (without newlines), which is passed through to the various functions.
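One plausible full version of that method, assuming a pattern that matches the description (letters, an opening square bracket, some characters, a closing square bracket, then the newline) and a simple True/False return:

def checkMultiline(self, line):
    # (inside the SGF class; re is already imported at the top of sgf.py)
    # letters, an opening bracket, some characters, a closing bracket, then the newline
    multiline = re.search(r'[A-Za-z]+\[.*\]\n', line)
    if multiline:
        return True
    else:
        return False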
This worked fine for OGS (except reviews) files, and then I tested it on Pandanet (IGS) files, where it promptly broke. The reason it broke was simple - Pandanet added a Copyright value into the metadata, and spread it over 4 or 5 lines (depending on where the SGF was created). I put Pandanet in a separate test, and focused only on that test. Running a single test in Python is as simple as:

python _tests.py SGFItemTests.test_load_pandanet_sgf

I quickly concluded that using regex for this particular case was going to be tricky, as the number of lines wasn't always uniform. Instead, I decided to adapt my readSGF function to simply not process the following lines when it discovers the Copyright value.
I do this by initializing a tempCount at 0, and setting it to a value of 6 when I can find "CoPyright[\n" in the string. I also added an 'if' to see if tempCount is greater than 0, and when it is, the counter is reduced by one and the loop follows the "continue" directive (where it jumps to the next item in the loop). This effectively skips the plain English lines of text, removing the problems. I also noticed that some SGF files had a CP[] copyright line (such as the OGS review files), which was shorter than CoPyright. As such, I simply initialized tempCount at 5, instead of 6, which worked fine. The only reason I could do this was that the copyright notices always appeared before the game information, …
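A rough illustration of that skipping logic - readSGF, tempCount and CoPyright are the names used above, everything else is an assumption, and the real code is in the linked Gist:

def readSGF(self, sgfPath):
    # (inside the SGF class)
    tempCount = 0
    with open(sgfPath) as sgfFile:
        for line in sgfFile:
            if tempCount > 0:
                tempCount -= 1   # still inside a copyright notice: skip this line
                continue
            if "CoPyright[" in line:
                tempCount = 6    # Pandanet spreads the notice over the next lines
                continue
            if "CP[" in line:
                tempCount = 5    # the shorter CP[] copyright variant
                continue
            # …otherwise handle the line as single-line or multi-line metadata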
I realize that this last section can be confusing to read. However, this is pretty much the final file, so viewing the links below should help clarify things. There were a few steps afterwards (such as when a file had no date), but they were simple enough to catch and solve when listening to the tests and batch running the file.
Conclusion

Anyone who follows the link to the Gist will notice a few things. Firstly, I sanitized the test files to remove any identifiable information. Especially since readers won't have my test files and will therefore need to adjust the tests, I felt it helpful to label the information more generically. Secondly, there's a bash file included. The reason for this is simple - I didn't want to install the python script into a folder in my $PATH, as it would include other files as well and break the tests. Instead, I wrote the bash script in my $PATH, which appends the full path to the files, and then runs the Python script within its actual folder with absolute paths. You'll need to adjust the path to main.py for your own system.

I hope this look into my TDD process might help inspire some readers to give it a shot, just as I have been inspired by others. Also, if there are any fellow Go players out there - perhaps you'll find this tool useful for organizing your own SGF files. If you have any questions, suggestions, …
Homework (Optional)

My own goal for this script is to expand it over time. My first revision would be to add a stats calculation system, which will give me the overall stats across all the servers I play on (perhaps even details on wins against stronger/…

Further Reading

http://…
https://…