NumericToCategoricalEncoding Input Transform. #2907
Conversation
Codecov Report
Attention: Patch coverage is

@@ Coverage Diff @@
##              main    #2907      +/- ##
===========================================
- Coverage   100.00%   99.98%   -0.02%
===========================================
  Files          212      212
  Lines        19778    19813      +35
===========================================
+ Hits         19778    19811      +33
- Misses           0        2       +2
@saitcakmak @Balandat any thoughts on this? It would be totally fine for me if you say that you do not see this functionality fitting directly into botorch. Then I would integrate it into our codebase, which is also totally fine ;)
The implementation seems reasonable to me. I'd be curious to see a concrete example using this end-to-end, including fitting a model and optimizing the acquisition function.
Similar to @saitcakmak, I would also be curious about how this compares against using a kernel that may work on the categoricals directly.
A `batch_shape x n x d'`-dim tensor where the integer encoded
categoricals are transformed to a vector representation.
"""
if len(self.categorical_features) > 0:
Since the whole transform would be a no-op if no `categorical_features`
is provided, should you instead just not allow this and error out? Or do you want to be able to apply the transform universally even if there aren't any categorical features?
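One way to realize the error-out option the reviewer raises — an illustrative sketch with hypothetical names, not the PR's actual code:

```python
class NumericToCategoricalEncodingSketch:
    """Illustrative stand-in for the transform under review (hypothetical)."""

    def __init__(self, dim: int, categorical_features: dict[int, int]) -> None:
        # Reject an empty mapping up front instead of silently acting
        # as a no-op on every input.
        if not categorical_features:
            raise ValueError(
                "categorical_features must be non-empty; without it the "
                "transform would be a no-op."
            )
        self.dim = dim
        self.categorical_features = categorical_features
```

The alternative design (allowing an empty mapping) would simply skip the `ValueError` and let `transform` return its input unchanged.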
"""Transform categorical parameters from an integer representation | ||
to a vector based representation like one-hot encoding or a descriptor | ||
encoding. |
Would be good to have a description of how the columns in the output of the transform are organized. Ideally there would be a concrete example in the docstring.
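As a concrete illustration of one possible column convention (an assumption, not necessarily the layout the PR implements), torch's built-in one-hot encoding maps integer code k to a 1 in output column k:

```python
import torch
from torch.nn.functional import one_hot

codes = torch.tensor([0, 2, 1])  # one integer-encoded categorical column
enc = one_hot(codes, num_classes=3)
print(enc)
# tensor([[1, 0, 0],
#         [0, 0, 1],
#         [0, 1, 0]])
```

A docstring example along these lines would make the output column ordering unambiguous.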
},
)

torch.manual_seed(randint(0, 1000))
This seems to defy the point of locking the seed?
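For reference, a constant seed is what makes the random draws reproducible across runs; seeding with `randint(0, 1000)` as in the diff reseeds to a different value each time:

```python
import torch

torch.manual_seed(1234)  # fixed seed
a = torch.rand(3)
torch.manual_seed(1234)  # same seed again
b = torch.rand(3)
assert torch.equal(a, b)  # identical draws, so the test is deterministic
```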
cont = torch.rand(3, 2, dtype=dtype, device=self.device)

X_numeric = torch.cat(
    [cat_numeric.view(-1, 1).to(dtype=dtype, device=self.device), cont],
No need for the `.to()`; you can do that directly above: `cat_numeric = torch.randint(0, 3, (3,), device=self.device, dtype=dtype)`
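A quick sketch of the suggested simplification, using cpu as a stand-in for `self.device`:

```python
import torch

device = torch.device("cpu")  # stand-in for self.device
dtype = torch.double

# Before: create a default int64 tensor, then convert with .to().
cat_a = torch.randint(0, 3, (3,)).to(dtype=dtype, device=device)
# After: pass dtype/device directly to randint, avoiding the extra copy.
cat_b = torch.randint(0, 3, (3,), dtype=dtype, device=device)

assert cat_a.dtype == cat_b.dtype == torch.double
```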
Args:
    dim: The dimension of the numerically encoded input.
    categorical_features: A dictionary mapping the index of each
this arg name could be more descriptive, e.g. `numeric_cardinality` or something like that
@@ -1625,6 +1625,122 @@ def _expanded_perturbations(self, X: Tensor) -> Tensor:
    return p.transpose(-3, -2)  # p is batch_shape x n_p x n x d


class NumericToCategoricalEncoding(InputTransform):
wait, should this not be `CategoricalToNumeric`?
cat_numeric1.view(-1, 1).to(dtype=dtype, device=self.device),
cat_numeric2.view(-1, 1).to(dtype=dtype, device=self.device),
Same comment as above about `.to()`.
Motivation
This PR refers to #2879. It adds a new input transform that transforms a categorical degree of freedom encoded as an integer into some kind of vector-based description. This could be, for example, a one-hot encoding, but also a descriptor encoding as is often used in chemistry. It adds the possibility to use the alternating acqf optimizer with surrogates that do not treat categoricals as integer-based values. For example, one could then use a SAAS GP with the mixed alternating acqf optimizer and treat the categoricals under the hood as one-hots.
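A minimal sketch of the idea, using a hypothetical helper rather than the PR's actual API: expand an integer-encoded categorical column into a one-hot block while passing continuous columns through.

```python
import torch
from torch.nn.functional import one_hot


def numeric_to_onehot(X: torch.Tensor, cat_idx: int, cardinality: int) -> torch.Tensor:
    """Hypothetical helper: one-hot-encode column cat_idx of X."""
    codes = X[..., cat_idx].long()                     # integer category codes
    encoded = one_hot(codes, num_classes=cardinality)  # ... x cardinality
    rest = torch.cat([X[..., :cat_idx], X[..., cat_idx + 1:]], dim=-1)
    # Assumed column convention: continuous columns first, then the encoding.
    return torch.cat([rest, encoded.to(X)], dim=-1)


# Column 0 holds a categorical with 3 levels; columns 1-2 are continuous.
X = torch.tensor([[2.0, 0.1, 0.7],
                  [0.0, 0.5, 0.2]])
Xt = numeric_to_onehot(X, cat_idx=0, cardinality=3)
print(Xt.shape)  # torch.Size([2, 5])
```

With such an encoding the surrogate (e.g. a SAAS GP) only ever sees the one-hot columns, while the alternating optimizer can keep working on the integer representation.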
Have you read the Contributing Guidelines on pull requests?
Yes.
Test Plan
Unit tests; most of them are implemented (also to demonstrate the functionality). The ones that check equality between transforms and correct behavior of the transform during training etc. are still missing. My plan is to add them after a first round of review feedback.