Try to work on replacing the dummy neural network with the one Phil gave to us
Finish bounding box work
Finish cropping of target images
Finish the pre-defined image database
Get video output working
Meeting with client 11/04/2019 @ 3:30:
We should schedule a meeting in Phil’s lab sometime at the beginning of this week. That way we can:
Test our work on his workstation / the lab server
See the old demo code that he was using?
How should cropping of target and scene images work in the UI?
Users click twice on the image; a box is drawn between those two points.
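The two-click crop could map onto the image array roughly as below. This is a sketch with assumed names (the real UI side lives in the Electron/JS code); the key detail is normalizing the two clicks, since users may click bottom-right before top-left, and remembering that NumPy indexes rows (y) before columns (x).

```python
import numpy as np

def crop_between_clicks(image, p1, p2):
    """Crop an H x W x C image to the box spanned by two click points.

    p1 and p2 are (x, y) pixel coordinates; the clicks may arrive in
    any order, so normalize to top-left / bottom-right first.
    """
    (x1, y1), (x2, y2) = p1, p2
    left, right = sorted((x1, x2))
    top, bottom = sorted((y1, y2))
    # NumPy indexes rows (y) first, then columns (x).
    return image[top:bottom, left:right]

# Example: clicks given in "reversed" order still produce the same box.
img = np.zeros((100, 100, 3), dtype=np.uint8)
patch = crop_between_clicks(img, (80, 60), (20, 10))
print(patch.shape)  # (50, 60, 3)
```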
RE: the problem of sending target and scene images of different sizes:
Phil thinks this isn’t something that we should be worrying about - most likely an issue with the backend code.
New feature requests:
It would be nice if users were able to choose in the application how many bounding boxes were shown.
Instead of choosing a number of bounding boxes, some threshold value for confidence would be nice too.
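Both feature requests amount to the same post-processing step on the detector's output. A minimal sketch of how the two filters could combine (the `detections` format here is an assumption, not the actual shape of Phil's output):

```python
def select_boxes(detections, max_boxes=None, min_confidence=None):
    """Filter (box, confidence) pairs two ways: keep at most max_boxes
    of the highest-confidence boxes, and/or drop anything whose
    confidence falls below min_confidence."""
    ranked = sorted(detections, key=lambda d: d[1], reverse=True)
    if min_confidence is not None:
        ranked = [d for d in ranked if d[1] >= min_confidence]
    if max_boxes is not None:
        ranked = ranked[:max_boxes]
    return ranked

dets = [((0, 0, 10, 10), 0.9), ((5, 5, 20, 20), 0.4), ((1, 1, 8, 8), 0.7)]
print(select_boxes(dets, max_boxes=2))         # top two by confidence
print(select_boxes(dets, min_confidence=0.5))  # everything at or above 0.5
```

Supporting both at once would let the UI expose either control without changing the backend call.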
Meeting with manager 8/04/2019 @ 10:30:
Keep working on interaction between webcam and server
Get ready for tech talk
Work on the hand-off plan
Meeting with manager 1/04/2019 @ 10:30:
Pull in frames from the webcam so that they can be sent to the server
Add version number information to the test documentation
Finish the how-to doc plan
Meeting with manager 19/03/2019 @ 6:15:
Get client-server interaction workable
Provide a fully working demo by the next meeting time
Begin investigating webcam interop
Client meeting 4/03/2019 @ 12:30:
When users upload an image on the initial screen, they should be able to crop it.
Make sure that the application window is resizable and looks OK when resized.
Phil will provide us with a TAR file containing pre-defined target images which he would like to include in the application. He would like us to make it so that this collection could be changed by him in the future without too much work.
The target images come in pairs. One shows the ‘front’ of the object and the other shows the ‘back’.
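Since Phil wants to swap the collection out later without much work, loading the TAR could be kept generic. A sketch, assuming a `<name>_front` / `<name>_back` filename convention, which is a guess since the actual archive layout hasn't been specified:

```python
import tarfile
from pathlib import Path

def load_target_pairs(tar_path, dest="targets"):
    """Extract the target-image TAR and group files into front/back
    pairs keyed by object name. Assumes names like 'mug_front.jpg'
    and 'mug_back.jpg'; the real archive may use a different scheme.
    """
    with tarfile.open(tar_path) as tar:
        tar.extractall(dest)
    pairs = {}
    for path in sorted(Path(dest).rglob("*")):
        if not path.is_file():
            continue
        stem = path.stem  # filename without extension, e.g. 'mug_front'
        for side in ("front", "back"):
            if stem.endswith("_" + side):
                name = stem[: -(len(side) + 1)]  # strip '_front'/'_back'
                pairs.setdefault(name, {})[side] = path
    return pairs
```

Replacing the collection then just means dropping in a new TAR; nothing in the application code would need to change.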
The NumPy arrays should be in RGB format (no alpha channel) when passed to the object detection code.
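A small guard before handing arrays to the detection code could enforce this. The function name is an assumption; note also that if frames ever come from OpenCV, they arrive channel-ordered as BGR and would need reordering as well:

```python
import numpy as np

def to_rgb(image):
    """Ensure an image array is H x W x 3 RGB before it reaches the
    object detection code: drop the alpha channel if one is present."""
    if image.ndim == 3 and image.shape[2] == 4:
        return image[:, :, :3]  # discard alpha, keep R, G, B
    return image

rgba = np.zeros((4, 4, 4), dtype=np.uint8)
print(to_rgb(rgba).shape)  # (4, 4, 3)
```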
In Phil’s GitHub repository, the test_tdid.py file contains the necessary imports for his object detection code and shows how to call the code.
Meeting with manager 4/03/2019 @ 10:30:
Again, the meeting after spring break will be Tuesday @ 6:15
We are going to try and have a working demo by the time of that meeting
Meeting with manager 25/02/2019 @ 10:30:
Meeting after spring break will be Tuesday @ 6:15
Set up weekly team meetings
Our goal for next week is to finish the remaining two scenes. The second scene should be fully functional, the third scene will just be a template.
We will try to begin playing around with the interop between JS and Python.
The tech talk that we give should be a demo of some technology. Other students should be able to make a simple working program using the technology afterwards.
Email Victor re: our decision of tech talk subject.
Meeting with manager 18/02/2019 @ 10:30:
Add information about React to platform selection
Add link to wireframes on the website
Write down a few possible subjects for a tech talk, along with possible talking points for each member
By next week, we will try to finish the three screens of the application:
Ben will work on adding React information to the platform selection document
Andrew will finalize the DB choice and add information regarding the selected DB to the platform selection document
Duncan will continue working on HTML, CSS layout for the app
All will continue working on JSX side of app
Meeting with manager 11/02/2019 @ 10:30:
Next week’s deliverable is an architecture diagram
Work on the first step of the process in Electron: allowing the user to upload target images to the application
Andrew will work on storing the library of images in a DB
Ben will work on JS
Duncan will work on HTML and CSS framework skeleton code for the page
Jay will work on wireframing and JS
Meeting with manager 4/02/2019 @ 10:30:
Remove language that might be too technical from the concept document. In particular, remove the term “GUI” wherever it appears.
Put the tweet currently on the main page of our website in the concept document.
Write a paragraph explanation of our project on the main page in place of the tweet.
“teach lead” –> “tech lead” on roles page.
Somewhere in the user stories document, reference the client’s notes given to us after the first meeting to clarify what we mean by group one, group two, …
By next meeting, provide a list of alternatives to Electron, along with the positives and negatives of each
Meeting with manager 28/01/2019 @ 10:30:
Create a page for deliverables to be posted on
Create a single email address for the entire team (put the email on the contact page)
Finalize team roles by next meeting
Decide upon a framework to use by next meeting
Finish user stories by next meeting
Client meeting 23/01/2019 @ 10:00:
Initial meeting with client
Main purpose of meeting was to better understand the project / what was wanted of us
Client provided us with a Google Doc re: his goals / objectives