Yet Another Contribution To Filer

I really like working in Filer. I'm not saying this just to get brownie points with my prof, who helps maintain the project. I'm saying it because I genuinely like working on it. That's probably pretty evident from the fact that I keep going back to it -- as of today, three of my five contributions for DPS909 have been for Filer. I'll admit that familiarity was part of what kept drawing me back to the project. The other part?

I really like writing tests.

I discovered this after my previous experience in Filer, where I had to make a test for one of the module's functions. The test I "wrote" wasn't that difficult to make -- it was mostly a copy-paste of a pre-existing test with some tweaks (hence the quotation marks around "wrote", ha). The tricky part was understanding what I had to do in order to create the test -- it was like putting pieces of a jigsaw puzzle together. I had a pre-existing test to use as a template, some general instructions on what the new test needed to do, and syntax guides in the form of other tests. Putting it all together was an interesting and rewarding experience... even if it was a very small fix.

So when the time came around to pick my fourth bug to work on, I decided to jump back to Filer and tackle a more difficult issue.

Making The Tests

I knew from browsing the issues tab earlier that a few test-related issues were still open, so I looked them over and decided to try my hand at solving this one. This time around, I wanted to do more than just copy-paste a test and tweak a few things... I wanted to copy-paste three tests and tweak a few things!

...I'm just kidding. ...Mostly. Though I did use pre-existing code as a template for the tests I wrote, these ones weren't nearly as cut-and-dried as the first test I'd created. They required a bit more thought and a bit more tinkering to get working, which was exactly what I was hoping for.

I started with the test I thought would be easiest to get working -- AKA, the one whose purpose I understood best. That test involved calling the fs.write() function on a file that hadn't been opened with a "write" flag. Easy stuff, right? Change the flag when the file gets opened, and presto, the test is done! ...Except it's not, because now I have to deal with the error that gets thrown. Funny how that works.

Filer's tests use Chai, so I headed on over to their documentation for BDD-style assertions to figure out what I needed to do. It took about two minutes for me to figure it out -- Chai makes it nice and easy. Expecting an error? Use something like "expect(error).to.exist". It's practically plain English -- just with some added punctuation. The hardest part was determining which error code to check for once the error was expected. I used "ENOENT" at first, since that's what other tests seemed to use, but when I ran the tests, it threw an error. Said error contained the code I actually needed -- EBADF. I swapped that in for ENOENT, and the test ran perfectly.

I repeated this process twice for the other tests. There was nothing complex about it, nothing extremely involved, but I enjoyed making them nonetheless. I enjoyed seeing all the green "tests passed!" output in the console even more once the tests I'd created were running. With everything working, I put up a pull request. No word yet on anything I need to fix -- I'm sure there's something I missed, due to inexperience -- but the CodeCov report noted an increase in coverage, which was very rewarding to see.
