I have run your model on raster images (2D slices of 3D models, with the background removed), and the results often show bleeding edges on my dataset's images. This might be due to my specific setup.
I've tried various post-processing steps to clean it up, since my end goal is to vectorize the image I get from DexiNed's forward pass. So far the best approach has been to run an additional sketch-simplification model (e.g. this) on the output of DexiNed's forward pass.
If this is relevant to what you're trying to achieve, I'm wondering what you think of merging this two-step process (DexiNed, then sketch simplification) into a single trained model.
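For reference, the two-step pipeline I'm running today looks roughly like the sketch below. The two `nn.Module` stubs are placeholders standing in for the real pretrained DexiNed and sketch-simplification networks (the actual checkpoints define their own architectures, input sizes, and normalization, so treat this only as an illustration of how the passes are chained):

```python
# Hedged sketch of the two-step pipeline: an edge-detection pass
# followed by a sketch-simplification pass. EdgeNetStub and
# SimplifierStub are hypothetical placeholders, NOT the real models.
import torch
import torch.nn as nn


class EdgeNetStub(nn.Module):
    """Placeholder for DexiNed: RGB image -> single-channel edge map."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.sigmoid(self.conv(x))


class SimplifierStub(nn.Module):
    """Placeholder for the sketch-simplification model: edge map -> cleaned sketch."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size=5, padding=2)

    def forward(self, x):
        return torch.sigmoid(self.conv(x))


@torch.no_grad()
def edges_to_sketch(image, edge_net, simplifier):
    """Chain the two forward passes, as done today with two separate models."""
    edge_map = edge_net(image)   # pass 1: raw edges (may show bleeding)
    return simplifier(edge_map)  # pass 2: simplify before vectorization


# Dummy RGB slice; the real pipeline would load a background-removed raster.
img = torch.rand(1, 3, 256, 256)
out = edges_to_sketch(img, EdgeNetStub().eval(), SimplifierStub().eval())
```

Merging the two into one model would presumably mean training a single network end to end on (input image, simplified sketch) pairs, so the intermediate edge map never has to be materialized or cleaned up.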
Here is one example showing the input image, the result after DexiNed, and the result after DexiNed + sketch simplification:
INPUT IMAGE
AFTER DEXINED
AFTER DEXINED + sketch simplification model (this will get vectorized to a .svg)