Saturday, May 2, 2026

Website Change Detection

I recently encountered a work-related scenario in which I felt that it would be beneficial for me to know when a website had been updated. The website in question performs periodic updates of certain kinds of data in a downloadable file format, but it does not offer a notification mechanism (e.g., via email, RSS, or other technology) when such changes occur. That led me to explore options for website change detection.

There appear to be many options available, but I found that most of them required subscriptions. Because I am merely evaluating these technologies, I was only looking for free options. That led me to sign up for accounts at Visualping and PageMonitor. I configured both of them to monitor my blog, https://digitaldaddyla.blogspot.com/.

Visualping and PageMonitor use slightly different methods to determine whether a change has occurred. For Visualping, setting up a new monitor requires entering the URL and specifying either an AI prompt describing the changes you are looking for or “Any changes,” which they label as “No AI used,” as depicted below. I used the “Any changes” option. I also had the option to specify the frequency of page checks, and I chose “every day”; it did not allow me to specify an exact time.

For PageMonitor, setting up a task requires that you specify a URL to display the current webpage. From there, you specify both an “Anchor area” and “Region of interest” by drawing boxes around both regions. The region of interest is the part of the page you want to monitor. The anchor area is a reference point that is used to relocate the region of interest each time the page is checked—this should be an area of the page that is not expected to change. You then specify how often to run the task. When I chose “once a day”, it prompted me to enter a time, for which I think I chose 7 AM.

I have since published 7 new blog entries and have been receiving email notifications from both Visualping and PageMonitor. For the purpose of this blog post, I am presenting my findings based on a review of the change logs in each of my accounts. Here is a summary.

I have several observations. First, Visualping failed to detect my most recent blog post on 4/17/2026, while PageMonitor has not missed any new blog posts. It is not clear to me why Visualping missed it. One possibility is that I had not logged in to my account since setting up the monitoring job. I received an email from Visualping on 5/2/2026 that stated, “We have not seen you for 3 months! We would like to confirm that you are still interested in us checking things for you. Please login in the next 3 days to keep your current monitoring frequency. Otherwise, your job frequency will be reduced to checking only once a month.” However, for a reduced frequency to explain the false negative, the reduction would have to have taken effect before 4/17.

Second, the elapsed time between blog publication and change detection was generally 1 day for both Visualping and PageMonitor. However, one blog post (Chicken Al Pastor and Oxford Commas) was not detected by Visualping until 2 days later.

Third, Visualping seems to detect website changes at different hours of the day, while PageMonitor allowed me to specify exactly what time to run my daily task. Notice that the transition from 7:01 AM to 8:01 AM can be explained by Daylight Saving Time beginning on Sunday 3/8/2026.

My final observation is that PageMonitor occasionally alerted me to changes to my blog when I hadn’t made any. For example, the “3D Printing Without Wi-Fi” blog that I published on 2/13/2026 was correctly detected on 2/14/2026, but PageMonitor also reported changes on 2/25/2026 and 2/26/2026. I suspect that an image failed to load on 2/25, making the page appear different, and then loaded properly on 2/26, which registered as yet another change.

Another example of a false positive from PageMonitor is from my “3D Printing and Firearm Blocking Technology” blog post on 4/17/2026. Following a successful detection on 4/18/2026, it falsely detected a change on 4/29, errored out on 4/30, and falsely detected another change on 5/1, even though I did not make any edits to the blog post or publish any new blogs.

In conclusion, based on a small sample size, Visualping appears to err on the side of false negatives, while PageMonitor appears to err on the side of false positives. It is possible that some of these errors are due to the webpage itself (e.g., images not loading). In any case, I find both Visualping and PageMonitor useful for detecting changes to websites. If you are looking for free options, I would recommend checking out both of them.
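
For readers who would rather roll their own monitoring than sign up for a service, the core idea can be sketched in a few lines of Python: fetch the page on a schedule (e.g., via cron), hash its contents, and compare against the hash from the previous run. Note that this naive whole-page comparison is exactly the approach prone to the false positives described above, since any dynamic element (ads, load failures) triggers an alert, which is presumably why services offer AI filtering or region-of-interest selection. The state-file name and URL below are placeholders.

```python
import hashlib
from pathlib import Path

def page_changed(content: bytes, state_file: Path) -> bool:
    """Return True if the content's hash differs from the previously stored hash."""
    digest = hashlib.sha256(content).hexdigest()
    previous = state_file.read_text() if state_file.exists() else None
    state_file.write_text(digest)  # remember the current hash for the next run
    return previous is not None and previous != digest

# Typical use from a scheduled job (URL and file name are placeholders):
#   import urllib.request
#   body = urllib.request.urlopen("https://digitaldaddyla.blogspot.com/").read()
#   if page_changed(body, Path("last_hash.txt")):
#       ...send yourself a notification...
```

The first run simply records a baseline; subsequent runs report a change only when the stored hash differs.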

Friday, April 17, 2026

3D Printing and Firearm Blocking Technology

On February 17, 2026, California introduced Assembly Bill 2047, known as the Firearm Printing Prevention Act. It would require several things to happen:

  • On or before July 1, 2027, the Department of Justice must publish written guidance on performance standards for persons or entities engaged in the creation of firearm blueprint detection algorithms to be certified for use by 3-dimensional printer manufacturers, as specified.
  • On or before January 1, 2028, the Department of Justice must accept applications for certification of firearms blueprint detection algorithms and begin issuing certifications of algorithms that meet or exceed the performance standards.
  • On or before July 1, 2028, any business that produces or manufactures 3-dimensional printers for sale or transfer in California must submit to the Department of Justice an attestation for each make and model of printer they intend to make available for sale or transfer in California, confirming that the manufacturer has equipped that make and model with a certified firearm blueprint detection algorithm.
  • On or before September 1, 2028, the Department of Justice must publish a list of all the makes and models of 3-dimensional printers whose manufacturers have submitted complete self-attestations, update the list no less frequently than quarterly, and make the list available on the department’s internet website.
  • Beginning March 1, 2029, the bill would prohibit the sale or transfer of 3-dimensional printers that are not equipped with firearm blocking technology and that are not on the department’s list of manufacturers with a certificate of compliance verification.

The bill would authorize a civil action to be brought against a person who sells, offers to sell, or transfers a printer without the firearm blocking technology. It would also make it a crime to knowingly disable, deactivate, uninstall, or otherwise circumvent any firearm blocking technology.

The bill refers to a couple of terms that deserve exploration. According to Assembly Bill 2047, “firearm blocking technology” means hardware, firmware, or other integrated technological measures capable of ensuring a three-dimensional printer will not proceed to any print job unless the underlying three-dimensional printing file has been evaluated by a firearm blueprint detection algorithm and determined not to be a printing file that would produce a firearm or illegal firearm parts. The bill also states that “firearm blueprint detection algorithm” means a software service that evaluates three-dimensional printing files, whether in the form of stereolithography (STL) files or other computer-aided design files or geometric code, to determine if the files can be used to program a three-dimensional printer to produce a firearm or illegal firearm parts, and flags any such files to prevent their use to manufacture a firearm or illegal firearm parts.

I searched the web to try to find companies or individuals who have created such technologies or algorithms, and the search results mainly yielded articles and videos about the 3D printing legislation in Washington, New York, and California. I then asked ChatGPT to summarize what it knows about firearm detection technology, and it stated that Thingiverse uses AI to detect and remove gun design files, and there are experimental tools such as 3D GUN’T. However, the solutions seem to be immature. ChatGPT concludes that the firearm blueprint detection algorithms mentioned in legislation are “largely hypothetical or early-stage” and “reliable prevention at the printer level is an unsolved problem” which is consistent with my observations.

I think that AI approaches are the best way to address this need, but I can also think of many challenges to doing it accurately. First, 3D models are not always designed so that the finished physical object is contained in a single file; they are often provided in multiple parts. Splitting a model could be necessary because the object is too large to fit on a standard print bed. It could also be because different parts of a model need to be printed with different materials (e.g., to add strength or flexibility) or colors. Certain features of a model may also be best printed in a particular orientation to optimize strength, improve print bed adhesion, reduce the need for support material, or minimize the chances of print failure. The bottom line is that when models are split into multiple objects, it could become difficult for firearm blocking technology to accurately determine that the many parts, when assembled, would form a firearm.

Second, firearms come in many shapes and sizes. I suppose that with enough training data, AI-based detection methods could learn what many different kinds of firearms look like. But what happens when users modify (or “remix” as the 3D modeling community would say) models so that they differ from training data? For example, what if a 3D model of a gun is presented in the form of a kit card? Its overall geometry would be a square or rectangle. When the borders and connectors of the kit card are snapped off, it would look like a gun, but that would happen in post-processing (downstream of the AI detection). Or what if a 3D model of a firearm was natively designed with support material? The support material could make the overall geometry significantly different than the firearm after all the support material was removed. Could firearm blocking technology be reliable enough to understand all of this?

Third, will firearm blocking technology be capable of understanding functional capabilities of 3D models? In other words, could it tell the difference between a “real” functional firearm and a non-functional prop? What if someone wants to print a replica of Han Solo’s blaster for a Halloween costume or a Star Wars convention? Would firearm blocking technology have a high enough false positive rate that it could become a burden to print legitimate models that pose no danger to society?

Perhaps there are current solutions to these challenges, or maybe technology will advance rapidly enough in the next couple years that these problems will be in the rear view mirror. In any case, I believe we have a major problem with guns in the United States, and I would love to see progress on reducing injury and death from firearms. However, it feels to me that the 3D printing legislation is misdirected, and I fear that it will adversely affect hobbyists like me while doing little to nothing to curb illegal activity because criminals will just find ways to circumvent firearm blocking technology.

For an additional perspective, read The Dangers of California’s Legislation to Censor 3D Printing by the Electronic Frontier Foundation.

Saturday, April 11, 2026

Sending a Fax in 2026

The other day my wife gave me 4 pages of paper and asked me to take them to a store and fax them to their destination. I suspected that this was the most expensive and inconvenient way to send the document. Based on various online sources, sending a domestic outgoing fax at FedEx costs approximately $2.50 for the first page and approximately $2.00 for each additional page, so a 4-page fax would cost approximately $8.50. Prices would be similar at The UPS Store, Staples, Office Depot, and similar stores, and of course would vary from store to store.

Therefore, I asked her to consider alternative options. Could the document be sent as a PDF file via email? She told me that email was unfortunately not an option and that it had to be sent via fax.

I read online that some public libraries offer free or low-cost fax services. I checked the website for our local library, and unfortunately it did not list faxing as a service at that branch. I wanted to call the library to ask if they offered fax services, but unfortunately it was after hours.

Finally, I decided to use an online fax service. Never having used one before, I asked ChatGPT to recommend a service with a good reputation and fair pricing. It offered a couple of options, and I somewhat randomly went with FaxZero.com, although I am sure there are many other online fax services with competitive offerings. The process was simple. I first scanned my 4-page document to a PDF file. I then entered information about the sender and receiver and attached my PDF file. There was an option to enter text for a cover page, but I left it blank. Then I paid $3.29 via credit card (note that sending faxes of up to 3 pages is free) and sent the fax. Email confirmations were provided when the fax was submitted and again when it was successfully sent.

I appreciated many aspects of the online fax service. First, we could send a fax without purchasing a physical fax machine. Second, we could send the fax from the comfort of our own home and avoid locating and driving to a physical store, and potentially waiting in a line or waiting for an agent to assist us. Third, we did not have to wait for the fax to transmit—instead, we simply received an email notification upon job completion.

If I were ever asked to fax something, I’d still first search for better alternatives such as email, but if I absolutely had to fax, I’d definitely consider using an online fax service again due to its convenience and lower cost compared to in-store options.

Monday, March 16, 2026

Essay Grading - Human vs. Machine

For my daughter’s high school, I volunteered to read and score scholarship applications submitted by graduating seniors. There were 10 categories of applications, including Academic Excellence, Arts, Athletics, Leadership, School Service, and others. All applications consisted of an essay, and some categories required the submission of supplemental information such as photos, videos, or other materials to support the applicant’s scholarship candidacy. Parent volunteers were placed in groups of 3, with each group asked to review 4 or 5 applications. Parents were provided with a grading rubric and asked to independently evaluate each student’s submission. To reduce the chance of bias, parents who knew a student were asked to request reassignment to another group.

The grading rubric consisted of 5 dimensions for a total of 20 points:

Followed Directions
2 points – Followed most or all directions
1 point – Followed some directions
0 points – Followed no directions

Answered Essay Prompt
3 points – Answered the prompt completely
2 points – Mostly answered the prompt
1 point – Somewhat answered the prompt
0 points – Essay has nothing to do with the prompt

Well-Written and Use of Good Grammar
5 points – Essay is well-written and almost all of the grammar is correct
4 points – Essay is somewhat well-written and most of the grammar is correct
3 points – Essay is adequately written and the grammar is somewhat correct
2 points – Essay is sloppily written and has numerous grammatical errors
1 point – Essay is poorly written and has many grammatical errors
0 points – Essay is incomprehensible

Provided Examples of Supporting Evidence
5 points – Completely supported essay with examples of evidence
4 points – Mostly supported essay with examples of evidence
3 points – Somewhat supported essay with examples of evidence
2 points – Provided a few examples to support essay
1 point – Did not provide enough examples to support essay
0 points – Provided no examples to support essay

Impact of Essay
5 points – Essay was outstanding and made the reader feel invested in the student’s essay
4 points – Essay was good and the reader felt connected to the student’s essay
3 points – Essay was okay and the reader understood what the student was trying to express
2 points – Essay had a point and the reader didn’t lose interest while reading the essay
1 point – Essay was poor and the reader had to work to engage with the essay
0 points – Essay was disjointed and the reader was unable to connect with the essay

Up to 2 bonus points were also given for applications that required supplemental information, but I’ve omitted those criteria for brevity.
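
As a side note, a rubric like the one above is easy to encode as a small table of per-dimension maximums, which makes it straightforward to validate scores (whole numbers within range) and tally totals. The following is a minimal Python sketch; the function name and structure are my own, not part of the scholarship program's materials.

```python
# Per-dimension maximum points, taken from the rubric above (bonus omitted).
RUBRIC_MAX = {
    "Followed Directions": 2,
    "Answered Essay Prompt": 3,
    "Well-Written and Use of Good Grammar": 5,
    "Provided Examples of Supporting Evidence": 5,
    "Impact of Essay": 5,
}

def total_score(scores: dict) -> int:
    """Validate scores against the rubric and return the total (max 20)."""
    for dim, pts in scores.items():
        if dim not in RUBRIC_MAX:
            raise KeyError(f"unknown dimension: {dim}")
        if not isinstance(pts, int) or not 0 <= pts <= RUBRIC_MAX[dim]:
            raise ValueError(f"{dim}: score must be a whole number 0-{RUBRIC_MAX[dim]}")
    return sum(scores.values())
```

A quick sanity check: the per-dimension maximums sum to the 20 points mentioned above.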

After submitting my scores, I wondered how my scores compared to those of other parents. Because I was the first volunteer in my group to complete my assignment, I did not have visibility into how the other 2 parents scored the students’ applications. However, I was able to externally validate my scores against those of various large language models (LLMs).

METHODS

There are too many LLMs to count nowadays, so I consulted the 7 that I was most familiar with, and I’ve listed the model that each one most likely used as of this writing. Some LLMs are more transparent than others about the identification and versioning of their free and paid models. For all 7, I used the free tier.

  • ChatGPT: Default model: GPT-5.2 Instant; Fallback model: GPT-5.2 Mini or similar lightweight version if you exceed limits
  • Claude: Sonnet 4.6
  • Copilot: Copilot model, built by Microsoft
  • DeepSeek: DeepSeek-V3.2
  • Gemini: Gemini 3
  • Grok: Grok 4.20 beta, Auto (Fast or Expert)
  • Perplexity: model not shown or configurable on free plan

I used the exact same prompt for all 7 LLMs and all 4 students:

You are a parent of a high school student who has volunteered to evaluate scholarship applications. Students who apply for a scholarship under the category of SCHOOL SERVICE are given the following essay prompt: “What contributions have you made to our high school as someone who serves this community?” Students who apply for a scholarship under the category of LEADERSHIP are given the following essay prompt: “Would others consider you a leader and why?” OR “What is your definition of a leader and how do you embody those characteristics?”

The grading rubric is provided in the attached “Essay Scoring Guidelines.pdf” file. Provide scores as whole numbers for the following dimensions in accordance with the scoring guidelines:

1. Followed Directions (0-2 points)
2. Answered Essay Prompt (0-3 points)
3. Well-Written and Use of Good Grammar (0-5 points)
4. Provided Examples of Supporting Evidence (0-5 points)
5. Impact of Essay (0-5 points)

Ignore the “Bonus Points” dimension in the scoring guidelines because the scoring of that dimension may involve evaluation of photos or videos. The student’s essay is attached. Provide the score for each of the 5 dimensions along with a brief justification for each score.

For each model, I pasted the prompt and attached the essay scoring guidelines in a PDF file along with a PDF file of the essay for student 1. I continued in the same chat thread, so for students 2-4 I attached only the PDF files of their essays, as re-attaching the scoring guidelines for each student would have been redundant. For privacy reasons, I have de-identified the student names and am not sharing the actual student essays.

RESULTS

My ratings, along with those of the 7 LLMs, are as follows (click the image to enlarge):

Although the LLMs provided brief justifications for their scores, I’ve included only the numeric results; I could easily furnish the complete LLM responses upon request.

Overall, there was general agreement between my ratings and the average ratings from the 7 LLMs. In terms of rank order, I gave the highest score to Student 1 (19 points), followed by Student 4 (17), Student 2 (15), and Student 3 (12). Using the average of all 7 LLMs, the highest score went to Student 1 (19.7), followed by a 2-way tie between Students 2 and 4 (19.1), and then Student 3 (15.4). In other words, the LLMs agreed with my ratings for the best and worst applications, although they did not draw a distinction between the two applications in the middle of the pack.

Across the board, I was equally or more critical of the essays than the LLMs, as the LLMs generally gave the same or higher scores in each of the 5 dimensions of the grading rubric. Upon examining the total number of points allocated across LLMs, the 3 most “lenient” graders were Grok (78 total points awarded), Perplexity (77), and Copilot (76), while the “strictest” graders were Claude (69), ChatGPT (70), and Gemini (70).
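
The leniency ranking can be reproduced with a simple sort over the total points each LLM awarded. The totals below are the six stated in this post; DeepSeek's total is omitted here because it was not stated above.

```python
# Total points awarded by each LLM across all 4 students (from this post).
totals = {
    "Grok": 78,
    "Perplexity": 77,
    "Copilot": 76,
    "ChatGPT": 70,
    "Gemini": 70,
    "Claude": 69,
}

# Sort from most lenient (highest total awarded) to strictest.
ranking = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```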

DISCUSSION

All 7 LLMs were up to the task of grading the essays in accordance with the grading rubric. I considered the possibility that some LLMs might not completely follow directions, but all of them adhered precisely to the grading criteria and listed scores that were concordant with the criteria. Some LLMs even tallied up the total scores for each student even though I did not specifically request it in my prompt, and when they did so, they performed addition without any errors.

There are several possible explanations for the differences between my ratings and the LLM ratings. First, it is possible that I’m a tough grader. I went into this activity thinking that these were all brilliant students, and that it would not be helpful if all the students clustered around near-perfect scores. In fact, this is exactly the outcome observed with the LLMs, as Students 2 and 4 were deadlocked in a tie. Second, it is possible that the LLMs were lenient graders. After all, sycophancy in LLMs has been well documented and researched, and many companies have made concerted efforts to tone down the level of sycophancy as they introduced new versions of their models.

This experiment validates that LLMs can be used to assess the quality of written text against a custom rubric. This is probably not surprising to readers who have already engaged with LLMs in similar ways, myself included. However, this is the first time I’ve quantified my findings. Another key takeaway is that LLMs can be used to critically appraise a body of written text so the author has a chance to make revisions based on the feedback. In academic settings, the mere usage of LLMs is not tantamount to cheating; it is the way in which an LLM is used that determines whether it serves as a learning aid or a means to cheat. In work settings, I encourage professionals to take full advantage of LLMs to enhance learning, spark creativity, and optimize productivity. As long as LLMs are not used as a substitute for critical thinking, I think we have a lot to gain.

Wednesday, March 4, 2026

Chicken Al Pastor and Oxford Commas

I was driving my wife home after she had a medical procedure, and she asked me to buy her some food from Chipotle. I listened with apprehension as she rattled off a litany of food items and ingredient customizations, as I knew there would be no way that I’d get all the details right. You see, my wife has very particular preferences when it comes to food. So to ensure that I had the best chance of getting her order correct, I asked her to text the instructions to me. She initially refused, saying that I make no effort to remember her preferences. I said that if I have to remember more than 2 or 3 things about her order, I will screw it up and she will be upset. Besides, I was driving and trying to find the restaurant, so I wasn’t able to pay enough attention to commit her customizations to memory. So she relented and sent me the following text message (verbatim and therefore in quotes):

“Chicken Al pastor, brown black beans corn, green sauce and salsa on side”

And while I was still driving, she verbally told me to take her phone and redeem an offer for free queso by scanning a QR code provided in the app and also scan her rewards number so she could earn points. I felt that I could remember those last 2 instructions because they were the last things she mentioned, and all the other details were in the text message. I had never heard of chicken al pastor, nor did I know that Chipotle had that on their menu. It turns out that it is a time-limited offer. Note that the link may not work when the offer expires, but here’s a screenshot from that site.

After I struggled to find parking, my wife stayed in the car while I entered the store and read her text message to the server. I was asked, “Burrito or bowl?” to which I requested a bowl, which is what my wife usually gets (BTW, I received no credit for knowing the answer to this question). I also had the intuition to know that “brown black beans” meant “brown rice and black beans” despite the instructions being technically incomplete/illogical (no credit for that one either). After carefully crafting my wife’s gourmet meal, I scanned the QR code for the free queso offer, scanned the QR code for my wife’s rewards program, and paid. Mission accomplished, or so I thought (foreshadowing).

When we got home, my wife asked where the green sauce was. I told her that I saw them put the green sauce in the bowl. She complained that she wanted the green sauce on the side, NOT IN THE BOWL.

----- Begin Side Conversation About Oxford Comma -----

One could argue that there was no Oxford comma in her text message, so it should have been clear that both the green sauce and salsa needed to be put on the side. However, if you read the entire text message, the punctuation is wildly inconsistent, so no reasonable person could definitively conclude that the absence of an Oxford comma necessarily meant that the green sauce should have been put on the side. Plus, I am pretty sure that my wife does not know what an Oxford comma is.

----- End Side Conversation About Oxford Comma -----

Anyway, the situation was quite upsetting to her, as she continued to complain that I never try to understand her. I actually found the situation somewhat amusing, because not only did I anticipate that this would happen, I also called it out and tried to prevent it, and it happened anyway. It’s not that I don’t try to understand my wife as a person; I just have a low tolerance for complexity when it comes to fast food, so I try to shift the burden of perfecting an order back onto her. In that sense, I think she is partially correct in her criticism of me. Also, I think I have been conditioned to just accept that whatever I do, it will be wrong, and I will be blamed anyway.

When I order food, I’ll usually accept whatever normally comes with the dish, or in the case of a build-your-own scenario, I’ll just have everything. Honestly, I don’t really care that much if I get white or brown rice, black or pinto beans, or green or red salsa. I certainly don’t need things put on the side; just dump everything in and save a plastic container from taking up space in a landfill. Besides, I will eventually mix it all together, and everything will come out the other end looking the same regardless of how it was prepared. And if someone orders food for me, I will say “thank you” and happily eat the food. No complaints, no drama.

I am not saying that people should not have detailed food preferences. I just think they should not impose their expectations on others and get upset when people fall short of those expectations. Also, a clearer text message such as this one could have prevented the snafu:

“Chicken al pastor in bowl, brown rice, black beans, corn, green sauce on side, salsa on side. In Chipotle app, redeem offer for free queso and scan rewards code.”

It is specific and understandable, and I just demonstrated how an Oxford comma in combination with other clear communication could have saved the day. Oh what could have been!

Friday, February 13, 2026

3D Printing Without Wi-Fi

Today I was unable to send a print job wirelessly from my Mac to my Bambu Lab A1 3D printer because our Spectrum internet service went down.

I am accustomed to sending print jobs wirelessly to my 3D printer; in fact, I have never done it any other way. Because I can turn my iPhone into a hotspot with my Visible Wireless cellular plan, I connected both my laptop and 3D printer to my hotspot. The connection was slow, partly because I have the basic plan with 5 Mbps hotspot speeds, but also because my 3D printer is located on the first floor where cellular reception is somewhat spotty. It is good enough for phone calls but not so great for transmitting larger amounts of data.

I sliced my model in Bambu Studio as I normally do. I then sent the print job which normally occurs in 2 phases. First, Bambu Studio uploads the print job from my laptop to Bambu Lab’s cloud service. Second, it downloads the print job from the Bambu Lab cloud to the 3D printer. It slowly but successfully uploaded the 4.1 MB print job to the cloud. However, the 3D printer struggled for a while to download the print job from the cloud and eventually failed.

Therefore, I reverted to the tried-and-true local printing method via microSD card, which bypasses the internet. After slicing my model in Bambu Studio, instead of sending the print job via the cloud, I chose the “Export plate sliced file” option. From there, a “Save sliced file as:” dialog box allowed me to save a .gcode.3mf file. I placed the .gcode.3mf file in the root directory of the microSD card that came with my Bambu Lab A1 3D printer and powered up the printer. After starting up, I pressed the “Print Files” option on the home screen and selected my .gcode.3mf file. From there, I was able to toggle options for AMS, dynamic flow calibration, and bed leveling, just as I would have done when sending a print job from Bambu Studio via cloud printing. It worked like a charm.
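
For anyone who exports frequently, the copy-to-card step can be scripted. Below is a minimal Python sketch that copies the most recently exported .gcode.3mf file to the SD card root; the export folder and volume name in the usage comment are placeholders that will vary by system.

```python
import shutil
from pathlib import Path

def copy_latest_export(export_dir: Path, sd_root: Path) -> Path:
    """Copy the most recently modified .gcode.3mf file in export_dir to sd_root."""
    exports = sorted(export_dir.glob("*.gcode.3mf"),
                     key=lambda p: p.stat().st_mtime)
    if not exports:
        raise FileNotFoundError("no .gcode.3mf files found in export folder")
    latest = exports[-1]  # newest export by modification time
    return Path(shutil.copy2(latest, sd_root / latest.name))

# Typical macOS use (both paths are guesses; check Finder for the real names):
#   copy_latest_export(Path("~/Downloads").expanduser(), Path("/Volumes/SDCARD"))
```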

With my first 3D printer, a Creality Ender 3 V2 Neo, I printed exclusively via microSD card because it did not offer a wireless option (at least not natively). Although printing via microSD card is not complex, it certainly is more convenient for me to send print jobs wirelessly than to transfer my microSD card between my computer (2nd floor) and 3D printer (1st floor). Some folks have concerns about privacy when sending print jobs through Bambu Lab cloud services, but I have no such concerns because all my prints are for fun and entertainment, and I have nothing to hide. I like the convenience of cloud printing and will appreciate it even more after my Spectrum internet service is restored!

Wednesday, February 11, 2026

3D Model Figurine Generators

I started 3D printing as a hobby in May 2023. At the time, most of my 3D prints were of models that other people created and uploaded to free online repositories such as the ones I’ve described here. I then took the next logical step of learning a CAD application called Tinkercad to create my own simple models. For some specific use cases, I’ve experimented with 3D modeling streets and terrain. Generative artificial intelligence exploded onto the scene in recent years, and now there are many websites that allow users to upload a photo and automatically generate a 3D figurine without knowing anything about mesh modeling of curved surfaces. In this post, I compare 2 free 3D figurine generators: PrintU by Bambu Lab and FanForge by Creality.

As depicted in the image at the top, I uploaded the same photo to PrintU and FanForge and generated 3D models. Both PrintU and FanForge had relatively easy-to-understand wizard interfaces, and both websites offered variations for how to generate the 3D models. I generated 3 variations in each application, and screenshots of the 3D models are presented below.

As you can see, the 3 models generated by PrintU were far more realistic than the ones generated by FanForge. In fact, the FanForge models did not even remotely match the facial features that were in the uploaded photo. The FanForge models were more “artistic” which could partially explain their deviation from reality, but if the starting point is a bust photo, I have an expectation that the resultant model should bear some resemblance.

I’d have to experiment with additional models generated from different photos before drawing more definitive conclusions, but my initial impression is that PrintU is the clear winner in this head-to-head comparison.