Progress in Feature Development

Monday, August 19, 2024 – Last week, I started by thoroughly testing the new feature for importing Excel spreadsheets, ensuring that it could handle various errors to provide a smooth user experience. Once I felt confident that most scenarios were covered, I deployed the latest changes to the cloud server.

Next, I shifted my focus to researching message queues, specifically RabbitMQ. I began by creating console projects following the provided documentation, but I hit a roadblock when trying to run the Docker command that builds the RabbitMQ container. I raised the issue with Mr. Peter, who walked me through the correct command the next day, and thankfully the Docker container was built successfully.
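The exact command isn't recorded in the log; for reference, the standard invocation from the RabbitMQ documentation, which starts the broker together with its web management UI, looks like this:

```shell
# Start RabbitMQ with the management plugin enabled
# (port 5672 = AMQP for clients, port 15672 = web management UI).
docker run -d --hostname my-rabbit --name some-rabbit \
  -p 5672:5672 -p 15672:15672 rabbitmq:3-management
```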

Simultaneously, I worked on developing a small feature that involved implementing a single button. Given the simplicity of the task, I prioritized completing it before returning to RabbitMQ testing. The following day, I successfully ran the feature but noticed that it lacked detailed error handling, and additional filters were needed. Moreover, the query associated with this feature required optimization, as it initially took about 7 seconds to execute. By the end of the week, I managed to optimize the query, reducing the execution time to just 1.5 seconds. However, I discovered a discrepancy in the calculated data from this feature compared to the existing data from another module.
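The log doesn't record which change cut the query from ~7 seconds to 1.5 seconds. A frequent culprit behind multi-second ORM-backed queries is the N+1 shape: one round trip per row instead of a single grouped query. The sketch below illustrates that idea in Python with sqlite3 (the project itself uses EF Core/C#; the table and column names here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE order_lines (order_id INTEGER, price REAL);
    INSERT INTO order_lines VALUES (1, 10.0), (1, 5.0), (2, 7.0);
""")

# Slow shape (N+1): one round trip per id -- the kind of pattern that
# stretches a query to several seconds on real data volumes.
order_ids = [row[0] for row in conn.execute(
    "SELECT DISTINCT order_id FROM order_lines ORDER BY order_id")]
slow_totals = {
    oid: conn.execute(
        "SELECT SUM(price) FROM order_lines WHERE order_id = ?",
        (oid,)).fetchone()[0]
    for oid in order_ids
}

# Fast shape: one grouped query, aggregated inside the database engine.
fast_totals = dict(conn.execute(
    "SELECT order_id, SUM(price) FROM order_lines GROUP BY order_id"))

assert slow_totals == fast_totals  # same result, far fewer round trips
```

Both shapes return identical totals; the difference is purely how many statements the database has to execute.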

Next week, my priority will be identifying and resolving this bug before I can proceed with the message queue task.

Implementing Excel Import Functionality and Tackling Minor Bug Fixes

Monday, August 12, 2024 – Towards the end of last week, I was tasked with creating a new functionality that allows data to be imported and read from an Excel file. This was a straightforward task requiring only two buttons for functionality. I started by working on the view and view model to get a comprehensive understanding of how the UI should operate and what data needed to be sent to the API.

Since this was my first time working with Excel import functionality, I spent some time familiarizing myself with the existing code to understand how previous imports were handled. After drafting the view and view model, I moved on to developing the API. My plan was to extract the data list from the imported Excel file, send it to the API, and then process it accordingly. After some work, I successfully completed the task, and the new feature worked as expected. However, after a review by Mr. Peter, I realized I had overlooked validation error handling for cases where the imported Excel file might contain faults or unexpected data.
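A hypothetical sketch of the validation Mr. Peter flagged as missing (the real row model and rules aren't in the log, and Python stands in for the project's C# purely for illustration): every parsed row is checked, and all faults are collected and reported before the API processes the batch, instead of letting a bad cell crash the import.

```python
def validate_rows(rows):
    """rows: list of dicts parsed from the spreadsheet, in file order."""
    errors = []
    for i, row in enumerate(rows, start=2):  # row 1 is the header row
        name = (row.get("name") or "").strip()
        amount = row.get("amount")
        if not name:
            errors.append(f"Row {i}: name is empty")
        if not isinstance(amount, (int, float)) or amount < 0:
            errors.append(f"Row {i}: amount is missing or negative")
    return errors

# A clean row passes; a faulty row is reported rather than crashing the import.
bad = validate_rows([{"name": "Widget", "amount": 3},
                     {"name": "", "amount": -1}])
```

Returning the full error list (rather than stopping at the first fault) lets the user fix the whole spreadsheet in one pass.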

In addition to this task, I also fixed a few minor bugs in previously developed features. These included adjusting the UI to fit the 120% display scaling used by most users, adding progress-percentage entries to the log files so progress can be tracked there, and optimizing an API query that was taking too long to execute. By the end of the week, the Excel import feature still needed some minor checks for additional conditions before I could push the latest changes to the cloud server.
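The progress-percentage logging can be sketched as follows (a minimal Python illustration; the real feature writes to the application's log files and uses its own batch sizes):

```python
def log_progress(done, total, log):
    """Append one progress entry; the real feature writes this to a log file."""
    percent = done * 100 // total
    log.append(f"progress {percent}%")

entries = []
for step in range(1, 5):       # stand-in for the real batch loop
    log_progress(step, 4, entries)
```

With one entry per batch, the log file shows exactly how far a long-running job got, even if it later fails.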

Navigating Migration Challenges and Debugging with Stack Traces

Monday, August 5, 2024 – After last week’s failed migration attempt, I identified the root cause as a mismatch between the data structure of the migration and the current table. After several attempts to resolve the issue, I decided to remove the problematic table entirely and recreate it using the existing SQL migration files.

Moving on to my next task, I needed to debug an issue where data generation was failing and the progress bar closed automatically without showing any error, so failures went unnoticed. To address this, I added more robust error handling and logging to ensure that any errors would be surfaced to the user. Even after these measures, however, the source of the error remained unclear. At this point, Mr. Peter suggested using stack traces for complex functions and demonstrated how to apply them effectively. With stack-trace error handling, we could pinpoint the exact line where the error was thrown, making troubleshooting much easier.
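The idea behind the stack-trace handling can be sketched like this (Python's traceback module stands in for the .NET equivalent, and the function names are invented): instead of letting the failure vanish with the progress bar, the full trace is captured so the exact throwing line is visible.

```python
import traceback

def generate_data():
    raise ValueError("generation failed")  # stand-in for the hidden failure

def run_job():
    generate_data()

try:
    run_job()
except ValueError:
    # The formatted trace names every frame with its file and line number,
    # pointing straight at the line that threw.
    trace = traceback.format_exc()
```

Logging `trace` (or showing it in a diagnostics view) turns a silent failure into a precise pointer to the faulty line.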

Later in the week, Mr. Peter assigned me to research message queues and identify the best platform for our upcoming tasks. After exploring various options, I became particularly interested in RabbitMQ and delved into its documentation to learn more about this service.

Combining Data Tables for Efficient Referencing

Monday, July 29, 2024 – Last week, after I pushed all my changes to the cloud server, Mr. Peter found two failing API tests. I quickly investigated and found that one was failing because I had added a new table but hadn’t updated that test's data. The second test, however, revealed a flaw in my logic.

The issue stemmed from two newly created tables meant to store information about user type A and user type B. They were designed as reference tables to reduce the amount of data held in a single table. During the unit test for this specific API, it turned out that user type A actually required a reference to the user type B table, which itself was only related to a different table. This forced me to rearrange the schema and combine user type A and user type B into one table so that both could be referenced without obstruction.

As the week was ending, I made the necessary changes but encountered a migration failure. I plan to resolve this issue on Monday as quickly as possible before moving on to the next task.

Improving Performance Without Raw SQL

Monday, July 22, 2024 – In the previous week, I implemented a raw SQL query that updates a column in a table based on a condition, resolving a bug where the loading bar never finished. While the query executed correctly, Mr. Peter advised that, since we are using the EF Core approach, we should minimize raw SQL unless absolutely necessary.

During this time, I also discovered another bug where a page displayed an “object reference not set to an instance of an object” error. Careful debugging showed that the error was caused by the recently added table: the query over it set up a reference but didn’t group the results correctly, so the right data could not be mapped.

After solving the bug, I quickly worked on removing the raw SQL query. Initially, I replaced it with a foreach loop and experimented with several variations. However, waiting for all loops to finish before returning results to the UI took about a minute, which was impractical. I then tried running the work with async/await to see whether performance improved, but the total execution time remained about the same.
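That outcome is expected: awaiting each operation in turn is still sequential, so async/await by itself changes who waits, not how long the work takes. Overlap only helps when the operations are independent (and an EF Core DbContext cannot be shared across concurrent tasks, which limits that option here). A minimal Python illustration, with `asyncio.sleep` standing in for one database call:

```python
import asyncio
import time

async def db_call():
    await asyncio.sleep(0.1)  # stand-in for one I/O-bound database operation

async def main():
    t0 = time.perf_counter()
    for _ in range(5):
        await db_call()        # awaited one after another: still sequential
    sequential = time.perf_counter() - t0

    t0 = time.perf_counter()
    await asyncio.gather(*(db_call() for _ in range(5)))  # overlapped
    concurrent = time.perf_counter() - t0
    return sequential, concurrent

sequential, concurrent = asyncio.run(main())
```

The sequential loop takes roughly the sum of all calls, while the gathered version takes roughly the longest single call.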

After some consideration, to avoid leaving the user waiting indefinitely with no feedback, I decided to keep the foreach loop and add a progress bar to the delete process.

Solving Report Section Bugs and Performance Issues

Monday, July 15, 2024 – Last week, I encountered a few bugs in the report section. The first involved the overall margin calculations, which were incorrect even though the formula was right. After generating the margins and filtering groups of data sharing the same reference, several groups showed the same bug pattern: one margin in the list was calculated as zero when it shouldn’t have been. The cause was that the generated margins were committed in batches, and I had forgotten to call an important function for the last batch.

The second bug occurred in another section with multiple combo boxes for selecting dates and years. When a user selected a year, the combo box that should have displayed all the years came up empty. Initially, I suspected the view model, but thorough debugging showed the problem was in the API, which lacked a query to retrieve all existing years for that data. Once I had identified the source of the bug, the fix was straightforward.

The third bug of the week involved a button intended to remove a table of data. The query should have been straightforward, but the loading bar never finished: the underlying query simply took too long. I solved this by implementing a raw SQL query, which brought the operation to under ten seconds.
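The speed-up from raw SQL here comes from doing the delete as one set-based statement inside the database engine rather than issuing one statement per row. A small sqlite3 sketch of the two shapes (the table name is invented, and the project actually used EF Core's raw-SQL facility rather than Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big_table (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO big_table VALUES (?)",
                 [(i,) for i in range(1000)])

# ORM-style shape (slow at scale): load the ids, then delete row by row.
# for (row_id,) in conn.execute("SELECT id FROM big_table").fetchall():
#     conn.execute("DELETE FROM big_table WHERE id = ?", (row_id,))

# Set-based shape: a single statement lets the engine do all the work.
conn.execute("DELETE FROM big_table")
remaining = conn.execute("SELECT COUNT(*) FROM big_table").fetchone()[0]
```

One round trip instead of a thousand is what turns a never-ending loading bar into a sub-ten-second operation.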

I pushed all the changes and will continue with my next task next week.