
Lab Notebook



Monday, February 6th
On our first day we met with Prof. Vergari and Prof. Rohan to discuss the general concepts and organize the project. We decided to do some literature review and prepared for our second meeting with the supervisors, where we would discuss the project in more depth and visit the labs. We set up a meeting with the supervisors every Friday to catch up on progress. Prof. Rohan sent us some articles and tutorials to help us understand the topic and learn how to work with our toolbox, 3D Slicer. We also worked on our tikiwiki and spent time learning how to use it; we discussed which sections we need and the overall design. The rest of the day was spent dividing the tasks and entering them into the Gantt chart.

Tuesday, February 7th
We worked remotely due to the strike. We joined an online meeting organized by our supervisors about "in vivo, ex vivo and in silico characterization of biological tissues and standard testing of inert materials", which was quite useful for seeing other aspects of imaging techniques and other projects working on these topics. We searched for relevant articles and read the articles and tutorials that the professor provided for us.

Wednesday, February 8th
We had a group meeting in which everyone explained what they understood from the literature review, since we had divided the sections among us. We set February 15th as the date to present our project and discussed the presentation. We also worked on our tikiwiki and made some progress on it.

Thursday, February 9th
In the morning at 9:30, we had a meeting with Prof. Vergari and Prof. Rohan to check our advancement. During this meeting we divided the tasks for the day: Ara would download 3D Slicer and its extensions and watch the tutorial to learn how to work with the software, Yinshuang would read the 3D Slicer guide, and Kelly would look through the thesis that Prof. Rohan had sent us. After the meeting, Ara and Yinshuang installed the software together, since we had issues with Ara's laptop, and then worked on learning to use it. Meanwhile, Kelly worked on the thesis. At the end of the day, we met among ourselves to review everyone's progress.

Friday, February 10th
Today we had a training day with the ultrasound device. Prof. Vergari and Prof. Rohan taught us the principles of the machine, the uses of the different probes, and how to actually scan with it. Then we tried to use the software needed to capture the video stream on the PC, but we ran into some difficulties, so they handed the problem over to us to solve. Ara and Yinshuang worked on identifying and fixing the software problem, while Kelly worked on the thesis and updated the calendar.






Monday, February 13th
Today we had training on motion capture devices, specifically the Polaris system. We met Prof. Persohn, who introduced us to the Polaris system, how to work with it, and the software we need to use. Now that we have learned to work with the Polaris, we should be able to track the tools on the PC through the software. Tomorrow we have an institute advancement meeting where we can also present our project, so today we worked on our presentation: Yinshuang prepared the slides, and Ara and Kelly practiced presenting it. We also worked on our tikiwiki: Ara worked on the "Meet the team" section and the Gantt chart, Yinshuang worked on "Materials and costs", and Kelly updated the calendar.

Tuesday, February 14th
Today we had an institute meeting about ultrasound at 10 am. First there was a presentation about ultrasound devices in general, and then everyone presented their projects, including us. At 3 pm we had another meeting with our supervisors and updated Prof. Colloud on what we have done so far. Since we need to use the ultrasound device, we arranged to have access to it all day tomorrow, and we set a meeting for Friday to catch up and discuss our progress. We had to decide whether to use the tutorials from the SlicerIGT website and Aurelie Sarcher, or to develop our own method. After discussing with the supervisors, we decided that for now it is better to follow the tutorials and the existing methods, to see if we can make them work. So we started to follow the tutorials from the SlicerIGT website.

Wednesday, February 15th
Today we started to work with the ultrasound device to see if we could capture the video stream. Ara and Yinshuang spent time on the problem and we succeeded in capturing the video with the "Epiphan" software. Since we still don't have the motion capture system, we skipped "fCal" and tried to get the video into 3D Slicer. Using all the tutorials we have, we succeeded in getting the video into 3D Slicer as well, which was a big step since we had had problems with it before. Then we tried different modules to see what else we could do with the program, and we were also able to record an ultrasound video of the patient at that moment. Meanwhile, Kelly was working on the literature review. In the afternoon we tried to use the Polaris device to capture the coordinates of the biomarkers in 3D Slicer as well, but we had some difficulty since we did not have access to its software or the cable connecting it to the PC. Yinshuang found a tutorial for it and worked on it with Ara, as well as reading all the tutorials on the SlicerIGT website. Kelly also read some tutorials on motion capture systems and how to connect them to 3D Slicer. We decided to have another training day with Prof. Persohn to ask him our questions.

Thursday, February 16th
Today we worked remotely due to the strike. We have a meeting with our supervisors tomorrow, so we decided to create a PowerPoint presenting what we have done so far and our progress. Since Kelly was close to the university, she went to see an experiment with the ultrasound device. Meanwhile, Ara and Yinshuang worked on the Polaris and tried to see if we could connect to the "NDI" software without an account. We also updated our tikiwiki.

Friday, February 17th
Today we practiced our weekly report and presented it to Prof. Vergari. The weekly report consists of our accomplished tasks, our in-progress tasks, and our plans for next week. Afterwards we tried to install the driver for the Polaris device. Prof. Vergari had the USB drive and helped us with it, but we had a problem with the configuration file: Plus Server couldn't recognize the device and we weren't able to capture the image. We tried different things, but in the end we couldn't succeed. So we decided with Prof. Vergari that Yinshuang will keep working on the Polaris to fix the problem and try to connect with the NDI software, Ara will start working with the Vicon device in parallel so we can see whether another motion capture device can provide the image, and Kelly will work with Prof. Rohan on the 3D design of the tool attached to the probe.






Monday, February 20th
Today Prof. Persohn helped us install the NDI Toolbox and NDI Cygna 6D, the two packages we need for the Polaris device. The toolbox contains three programs: "NDI Capture", "NDI Configure" and "NDI Track". Ara and Yinshuang worked with them to fix the problem we had. First we loaded the ROM files we had for each tool into NDI Track and were able to capture the positions of the markers. Then, with NDI Capture, we were able to capture the image from the Polaris on our laptop. Now the Polaris can successfully connect to Plus Server and communicate with 3D Slicer. However, we realized that we need to close all the NDI programs first: we can't use Plus Server and the NDI software at the same time. While we were working with the Polaris, Kelly installed and started learning Fusion 360 for the 3D design. Prof. Colloud suggested that since the Polaris is working now, we focus on it and, if we have time, try the Vicon system as well.

Tuesday, February 21st
Today Ara and Yinshuang used the ultrasound room and we moved the Polaris there. First we tried to establish a connection with fCal, one of the primary software tools that comes with Plus Server. The config file worked and the connection was successful. We also saw that Plus is able to communicate with each of the markers. Since we are not using a reference, but there was a reference tool in the 'Tool state' section of fCal, we needed to delete it, so we changed the config file. At first fCal couldn't establish a connection, but we eventually made it work. Then Yinshuang created another config file that combines the two devices so that we can receive signals from both; it also connects successfully with the software. Kelly worked in Fusion 360 on the 3D design of the tool.

Wednesday, February 22nd
We worked remotely today. We had a meeting to discuss what we are going to say in tomorrow's meeting with Prof. Colloud, and since Kelly had some problems with Fusion 360, we tried to help her with that.

Thursday, February 23rd
At 9:30 am we had a meeting with Prof. Colloud to discuss our project. We told him our progress and talked about how we are going to manage the next steps. Since we no longer have any problems with the software and our next step is to 3D print the tool, Yinshuang and Ara decided to help Kelly with Fusion 360. We need to 3D print a tool attached to the probe, because if we attach the markers to the probe with tape, they may shake due to not being firmly attached, which would introduce errors. So we spent the whole day designing the tool in Fusion 360.

Friday, February 24th
As we had the geometries of the passive markers from the NDI tutorial, we worked on the 3D design of the tool. Ara and Yinshuang worked on it, and by watching some tutorials we were able to finish the design; Yinshuang made the final version. We also designed a part for attaching the tool, since in the end the tool needs to be attached to the probe. We decided to show our ideas and designs to Prof. Vergari on Monday.






Monday, February 27th
Since we have a meeting with our supervisors tomorrow, we practiced our presentation. We also worked on our website and updated different sections. We decided to divide the 'About the project' section into three parts: one for 'Ultrasound devices', one for 'Motion capture devices', and an 'About our project' part in which we explain our goal. We also decided that Ara will write the ultrasound section, Yinshuang the motion capture section, and Kelly the part about our project.

Tuesday, February 28th
Today we had a meeting with our supervisors to discuss our progress. We also showed them our 3D design and got their feedback; Prof. Vergari said that he will also work on a design. We became more familiar with the rotation and transformation matrices that we need for our project. We arranged some meetings for next week and decided to do the temporal calibration tomorrow. For the rest of the day, we studied the rotation matrices in more depth and tried to finish the tikiwiki sections.
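To make the transform chaining concrete for ourselves, here is a minimal sketch (with made-up poses, not our actual tracker data) of how a StylusToProbe transform follows from the two transforms the tracker reports, using 4×4 homogeneous matrices:

```python
import numpy as np

def make_transform(rotation_deg, translation):
    """Build a 4x4 homogeneous transform: rotation about z, then translation."""
    t = np.radians(rotation_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    T[:3, 3] = translation
    return T

# Hypothetical poses as the tracker would report them (not real calibration data).
stylus_to_tracker = make_transform(30, [100.0, 50.0, 20.0])
probe_to_tracker = make_transform(-15, [80.0, 60.0, 25.0])

# The derived transform: StylusToProbe = inv(ProbeToTracker) @ StylusToTracker
stylus_to_probe = np.linalg.inv(probe_to_tracker) @ stylus_to_tracker

# The stylus-frame origin (the stylus tip, in our setup) expressed in the probe frame.
tip_in_probe = stylus_to_probe @ np.array([0.0, 0.0, 0.0, 1.0])
```

The key point for us was that composing the two tracker transforms this way removes the tracker frame entirely, which is why the stylus can be located relative to the probe.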

Wednesday, March 1st
Today we started the temporal calibration. We used an old 3D-printed design to attach the tool to the probe, since our new design is not ready yet. Since there was an experiment in the ultrasound room, we moved the old ultrasound device to the surgery room where the Polaris system is, and connected both to our laptop. We filled the water bath, because we needed to put the probe in the water and move it slowly up and down, while keeping the stylus in a position where the Polaris can detect it. fCal is the software for the temporal calibration, so we used it to check that we had signals from the stylus and from the markers attached to the probe. Everything was OK, so we started the calibration: Yinshuang operated the software, Ara moved the probe up and down, and Kelly held the stylus. We had some problems with the calibration, but in the end we succeeded and the results were satisfying.

Thursday, March 2nd
Today Prof. Vergari showed us his design for the tool attached to the probe, and we 3D printed it with Prof. Persohn, who helped us with the 3D printer. Our first attempt wasn't successful due to a change in room temperature, but the second attempt succeeded. We couldn't use the ultrasound room today because of other experiments, but it is free tomorrow, so we scheduled the spatial calibration for then. The 3D-printed design fit the probe perfectly and we were able to attach the tool with the markers on it. We finished the tikiwiki sections and Ara also wrote the 'About our project' section.

Friday, March 3rd
Today we attended an experiment for intervertebral disc research, which took place in the ultrasound room with 5 different patients. It was helpful to see a more detailed experiment with the ultrasound device. Afterwards, Ara and Yinshuang repeated the temporal calibration. We tried different setups to get better signals, and in the end Prof. Vergari suggested doing it in the metal container; since it was metal and smaller, we got stronger signals, and the calibration quality was better because the plots were more accurate and aligned this time. After that, we started the spatial calibration, following the tutorials from Aurelie and the SlicerIGT website. For this part, we need to place the stylus tip at each corner of the ultrasound image. So Ara moved the stylus under the probe inside the water bath and tried to capture the tip at each corner of the image, while Yinshuang worked on the PC to record the ultrasound video. After capturing the tip at each corner, Ara rotated the probe 45 degrees and captured the tip at each corner again. The hands-on work and the video recording were successful. Then we tried to place the fiducials, which are the (x, y, z) points in Slicer. This worked, except for one problem with the probe fiducials: every time we tried to change their coordinates, they stayed the same. We decided to work on fixing that next week.
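Conceptually, matching the stylus-tip fiducials to the image corners boils down to a least-squares rigid registration between two corresponding point sets. The actual fCal calibration pipeline is more involved, but the core idea can be sketched with synthetic points (made-up image-corner coordinates, not our recorded fiducials):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src points onto dst points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic corner fiducials (mm, image frame) and the same points as seen
# in the probe frame under a known rotation + translation.
image_pts = np.array([[0, 0, 0], [38, 0, 0], [38, 50, 0], [0, 50, 0]], float)
angle = np.radians(20)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
probe_pts = image_pts @ R_true.T + np.array([5.0, -3.0, 12.0])

R, t = rigid_register(image_pts, probe_pts)
residuals = image_pts @ R.T + t - probe_pts
rmse = np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))
```

With noise-free synthetic points the registration recovers the exact transform; with real fiducials, the residual RMSE is what tells us how good the calibration is.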






Monday, March 6th
Today we tried the spatial calibration again, but it still failed. We only got the video stream from the ultrasound. When we labeled the fiducials, the coordinates of the probe corresponding to the different stylus-tip positions never changed. We noticed that when we used Aurelie's file in 3D Slicer, the needles in the 3D model did not follow the probe movement, which is abnormal. We have several hypotheses: the StylusTipToStylus transform in the files we used may not match our setup; the experimental process may have gone wrong so that the Polaris system could not keep tracking the markers; or our config file may be incorrect, meaning we could not get the tracking data into 3D Slicer. So we first have to check whether 3D Slicer can retrieve data from the Polaris. We searched a lot on the Internet and tried several methods, but we still can't confirm whether the Polaris data is successfully transferred to 3D Slicer.

Tuesday, March 7th
Due to the traffic strike, we worked remotely. Today, we stayed home and continued to search the internet for some relevant information. We found some configuration files and some tutorials that might work. Since we don't have the equipment, we'll have to try again tomorrow. We also updated our tikiwiki.

Wednesday, March 8th
Today, we were still working on the spatial calibration. We first tried using the default config files for the Polaris system provided by Plus Server and the "calib_spatial" scene file from the tutorial, but we still couldn't see the needles moving, and we did not figure out the reason. Fortunately, Yinshuang finally found a template scene package for stylus tracking on the SlicerIGT data website. When we use this template and the default config file for the Polaris, we can successfully track the tool and the stylus and see the transform matrices from tool to tracker and from stylus to tracker (each of them has a corresponding needle automatically shown in the 3D window). Based on the default config file that worked, we created a new config file which sets the stylus tip as the origin of the stylus and exposes the "StylusToTracker", "ProbeToTracker" and "StylusToProbe" transforms. But we haven't managed to obtain the video stream with the new config file yet.

Thursday, March 9th
We had a meeting with Prof. Claudio this morning. We talked about our progress and the problems we have met, and he gave us some advice on how to improve the config file. After the meeting, we changed the config file and also the scene file that Aurelie gave us. In the config file, we set the stylus tip as the origin of the stylus and kept only the transform from stylus tip to probe; the config file now transfers the video stream from the ultrasound and the transform matrix from the Polaris. We also deleted the "StylusToProbe" transform in Aurelie's scene file. This time, we successfully saw the video stream and a moving needle corresponding to the stylus-tip-to-probe transform in 3D Slicer. After lunch, we followed the tutorial and continued the spatial calibration. We finally got the linear transform from image to probe, but we were not sure whether it was correct. So we tried to do a volume reconstruction of a pen, to check whether 3D Slicer can reconstruct it at the same size, but we haven't finished it yet.

Friday, March 10th
Yesterday we tried to do a volume reconstruction of the pen, but the pen was too light and too thin, which made it difficult to scan. So we switched to a transparent ruler and a shallow sink. We successfully recorded the video sequence, but when we wanted to crop the volume, 3D Slicer could not run because it ran out of memory. Even after changing to 3 different computers, it didn't work, so we had to find the problem. We found that the ROI we chose didn't fit the video. Besides, the image from the ultrasound didn't follow the movement of the "ProbeToTracker" transform's needle in the 3D model window. Ara found some tutorials on changing the ROI, while Yinshuang changed the structure hierarchy of the whole scene to make the image follow the "ImageToProbe" transform. Finally, the ROI was resized to an adequate size automatically, and the image could follow the movement of the needle, but it always disappeared when moving. This is our problem to fix next week.





Monday, March 13th
This morning, all the supervisors had a meeting, so we didn't have access to the ultrasound room without their keys. We did some research on the Internet and tried to use the ZoteroReference plugin in tikiwiki for the whole morning. After lunch, Kelly worked on the bibliography while Yinshuang and Ara continued the volume reconstruction. Following Aurelie's tutorial on the structure hierarchy, we put image_image under the "ImageToProbe" transform, which in turn is under "ProbeToTracker". But the image was reformatted and not shown; only the "Image Slice" bar was moving, which meant that the image was now following the movement of the probe. When we put image_image under "ImageToProbe" alone, the image disappeared again. Then Prof. Claudio came and helped us. We put image_image under "ProbeToTracker" only; in the 3D model window, the needle followed the movement of the probe. Then we inserted a fake "ImageToProbe" transform which only shrank the image and added some translation. The image still followed the movement of the needle. We also confirmed that we can do real-time volume reconstruction.

Tuesday, March 14th
This morning we attended the monthly institute meeting. After the meeting, we still struggled with the volume reconstruction. As the image didn't show up under the "ImageToProbe" transform we got from the spatial calibration, we thought there might be something wrong with our calibration. So Yinshuang and Ara did the spatial calibration again while Kelly was working on the bibliography part of the tikiwiki. This time, we got a new transform with an RMSE between 1 and 3, which means the result is more reliable than the last one. When we put the image under the new "ImageToProbe" transform, it appeared in the 3D window. But when we put image_image under the "ImageToProbe" transform, which in turn was under "ProbeToTracker", we still didn't see the image. With Prof. Claudio, we again faked an "ImageToProbe" transform very close to this one, and we finally got the image moving in the 3D window. Then we did the scan and the volume reconstruction, but we failed to crop the volume. We will figure out the reason and also check our transform again tomorrow.

Wednesday, March 15th
Today, Ara and Yinshuang were still trying to reconstruct the volume in real time, and Kelly was working on the bibliography. In the morning, Yinshuang and Ara did several spatial calibrations to verify whether the "ImageToProbe" transform was reliable. We found that the transform and the RMSE changed every time, but they were always similar to each other. We also found that the lower the detection depth, the smaller the errors. Then we turned to the volume reconstruction. We scanned different objects such as a 3D-printed bone, the transparent ruler, our forearm and our finger; the volume obtained from the ruler was the best. We measured the length of the reconstructed 3D ruler: the error is within 3 mm, which means the reconstruction is accurate enough. The problem we couldn't figure out is that the volume reconstruction worked with our manually typed "ImageToProbe" transform matrix, but not with the transform matrix automatically saved during the spatial calibration, even though they were the same. As the project will soon be complete, Claudio suggested that we do some more in-depth scientific research and quantitative analysis.

Thursday, March 16th
Today, Kelly worked on the bibliography. Ara and Yinshuang each tried to do a segmentation based on the volume obtained yesterday. Ara did the segmentation in the axial plane, which is the most common plane used for segmentation, while Yinshuang performed it in the sagittal plane, where the volume is more visible than in the axial one. Both built a 3D model of the ruler; Yinshuang's seems a little better restored than Ara's, but neither is smooth enough. During the segmentation, since the bottom of the sink was also recorded in the volume, we all found it quite difficult to distinguish the bottom from the ruler. Maybe it will be clearer once we do the volume reconstruction better. At the end, we also used the "Segment Statistics" module to calculate the volume of the model, which is approximately 93256 mm³.
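The volume that "Segment Statistics" reports is essentially the number of labeled voxels multiplied by the volume of one voxel. A toy illustration (made-up label map and voxel spacing, not our actual ruler data):

```python
import numpy as np

# Toy 3D label map: 1 where the object was segmented, 0 elsewhere.
labelmap = np.zeros((60, 40, 40), dtype=np.uint8)
labelmap[5:55, 10:30, 12:18] = 1  # a 50 x 20 x 6 voxel block

spacing_mm = (1.0, 0.5, 0.5)      # hypothetical voxel spacing in mm
voxel_volume = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]

# Segment volume = labeled voxel count * volume of one voxel
segment_volume_mm3 = int(labelmap.sum()) * voxel_volume
```

This also explains why the segmentation quality matters so much: every voxel of the sink bottom mistakenly labeled as ruler adds directly to the computed volume.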

Friday, March 17th
Today we worked on the tikiwiki. Kelly worked on the bibliography, Yinshuang uploaded the Google Spreadsheet version of the Gantt chart, and Ara added different sections to the protocol. After lunch, we discussed how to carry out the follow-up study for our project: to evaluate the accuracy and reproducibility of our protocol, we need to repeat the measurements from calibration to volume reconstruction and change parameters such as the scan depth. We also started to complete the protocol, redid the experiment, and recorded a video of the operation as a tutorial.





Monday, March 20th
Today we worked on our website. Kelly was preparing her presentation about the bibliography. Yinshuang and Ara tried to upload the video we recorded, but tikiwiki only accepts files under 8 MB, so we had to shrink them. Even then, we still couldn't get the video to display on our website. We spent a lot of time on this problem and finally found out that tikiwiki uses Adobe Flash Player, which is no longer available. So we created a YouTube channel for our long videos, and converted the short ones to GIFs. We also tried to meet Prof. Floren today to discuss our progress, but he was too busy, so we will meet him tomorrow.

Tuesday, March 21st
Today, Yinshuang and Ara continued working on our website and uploaded all the videos. We also started to do 3D scans of other objects: our finger, the 3D-printed femur, and our forearm. But because of the size of the sink and the scan depth, we could not get good-quality images. We also found it difficult to isolate the muscle when reconstructing the volume. We discussed with Prof. Floren and Prof. Claudio separately and decided to do the repeatability and reproducibility analysis first. Prof. Floren and Prof. Claudio endorsed our tikiwiki and gave us some advice on improving it. Kelly presented the bibliography to Prof. Claudio.

Wednesday, March 22nd
Today we started the repeatability analysis. We decided that each of us would do the experiment 3 times at the same depth. Ara did 3 experiments of spatial calibration and volume reconstruction, and also finished the segmentation. Kelly did the experiment once. Yinshuang set everything up for them on the laptop and recorded the data for each experiment. Each experiment took a long time, and we could not finish all the trials in one day, so we will continue over the following days.

Thursday, March 23rd
Because of the strike, we worked remotely. Yinshuang started designing the poster and prepared some content for it. Ara selected some photos of our project and put them in the gallery section. We kept improving our tikiwiki based on the supervisors' suggestions.

Friday, March 24th
This morning we attended the institute meeting "Optimizing walking ability in people with a lower limb amputation: Man vs Machine", presented by Prof. Han HOUDIJK. After lunch, we continued the repeat experiments: Yinshuang did 3 trials and Ara did one more. Kelly also did one, but perhaps because she hid the probe during the scan, we couldn't get any usable fiducials for it. Today, 3D Slicer didn't work well: sometimes we couldn't get the live video stream, and sometimes we couldn't get the signal from the probe. So we had to restart 3D Slicer every time, which cost a lot of time.




Monday, March 27th
Today we continued the repeat experiments. Yinshuang and Kelly each did only one trial because there were always problems in 3D Slicer: we kept losing the video stream or the location of the probe, and had to restart 3D Slicer and reconnect with Plus Server Launcher again and again. Finally, we installed the newest version of 3D Slicer and it seems to work well. Ara redesigned all the weekly reports. Yinshuang and Ara discussed the layout of our poster, and Yinshuang finished the first version of it.

Tuesday, March 28th
As there was a strike, we worked remotely. This morning, we had a meeting with our supervisors at 10 am. We discussed how to analyze the data obtained from the spatial calibration and the volume reconstruction. We also talked about our poster: Prof. Claudio and Prof. Floren gave us many suggestions, so after the meeting Yinshuang and Ara redesigned the poster based on their advice.

Wednesday, March 29th
This morning, we met Prof. Claudio in the ultrasound room and implemented the 3D ultrasound imaging on his laptop, to ensure that the team can follow our protocol even after we leave. Our protocol worked successfully on his computer, and the whole process took almost two and a half hours. After testing the protocol, Prof. Claudio gave us some useful advice on how to do a scientific analysis of our data. We decided to evaluate the repeatability, reproducibility and accuracy only by comparing the length of the reconstructed ruler. He also gave us a spreadsheet which automatically calculates the average, repeatability and reproducibility once we enter our measurement data. So in the afternoon, we measured the lengths of the 9 reconstructed rulers.
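The spreadsheet's computations can be sketched roughly as follows, with hypothetical ruler-length measurements (the exact formulas in the spreadsheet we received may differ): repeatability reflects the spread within each operator's trials, reproducibility the spread between operators' means, and accuracy the deviation from the known length.

```python
import statistics

# Hypothetical ruler-length measurements (mm): three operators, three trials each.
measurements = {
    "operator_1": [152.1, 151.8, 152.4],
    "operator_2": [151.5, 152.0, 151.7],
    "operator_3": [152.3, 152.6, 151.9],
}
true_length_mm = 152.0  # assumed reference length of the ruler

all_values = [v for trials in measurements.values() for v in trials]
grand_mean = statistics.mean(all_values)

# Repeatability: pooled within-operator standard deviation.
within_vars = [statistics.variance(trials) for trials in measurements.values()]
repeatability_sd = (sum(within_vars) / len(within_vars)) ** 0.5

# Reproducibility: spread of the per-operator means.
operator_means = [statistics.mean(trials) for trials in measurements.values()]
reproducibility_sd = statistics.stdev(operator_means)

# Accuracy: deviation of the overall mean from the reference length.
accuracy_mm = abs(grand_mean - true_length_mm)
```

Splitting the variability this way is what makes it possible to tell whether the measurement errors come from the protocol itself or from differences between the people applying it.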

Thursday, March 30th
Today, Yinshuang calculated the accuracy of our method, and Yinshuang and Ara worked on the second version of our poster. We showed the poster to Prof. Claudio again; he gave a lot of suggestions, and there is still a lot to improve, especially the materials and methods part. Yinshuang and Ara felt that the reconstructed ruler and finger were still not good enough, so we redid the experiment several times in the afternoon. Kelly worked on the discussion part of our poster.

Friday, March 31st
Today, Yinshuang segmented the ruler and finger reconstructed yesterday, and Ara worked on the third version of our poster. After lunch, Prof. Claudio came to help us improve the poster. As today is the last day of our project, we prepared some chocolates to share with everyone in the institute. We returned the materials and extended our special thanks to Prof. Sylvain and Alexandre. Our poster is complete, but we still need to work on a few very small details.