feature #58141: Consistent naming conventions for string dtype aliases #61651


Open · wants to merge 1 commit into base: main

Conversation

pedromfdiogo

Key implementation steps:

  • Created factory functions (string, datetime, integer, floating, decimal, boolean, list, categorical, interval, period, sparse, date, duration, map, struct) to generate pandas dtypes (e.g., StringDtype, Int64Dtype, ArrowDtype) based on parameters like backend, bits, unit, and precision.
  • Added support for pandas, NumPy, and PyArrow backends, enabling seamless switching (e.g., integer() returns the nullable Int64Dtype for the pandas backend, or ArrowDtype(pa.int64()) for PyArrow).
  • Implemented parameter validation to ensure correct usage (e.g., validating mode in string() to be "string" or "binary", and unit in datetime() for NumPy).
  • Integrated PyArrow types for advanced dtypes (e.g., pa.float64(), pa.list_(), pa.map_()), supporting modern data processing frameworks.
  • Implemented comprehensive tests in test_factory.py to validate dtype creation across all functions, ensuring correct behavior for different backends, verifying string representations (e.g., "double[pyarrow]" for pa.float64()), and confirming proper error handling (e.g., raising ValueError for invalid inputs).
  • Addressed PyArrow compatibility by implementing correct method calls, such as using pa.bool_() for boolean dtypes, ensuring proper integration.
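The backend dispatch described in the bullets above can be sketched roughly as follows. This is an illustrative example, not the PR's actual implementation: the function name matches the description, but the signature and internals are assumptions.

```python
import pandas as pd
import numpy as np


def integer(bits: int = 64, backend: str = "pandas"):
    """Return an integer dtype for the requested backend (sketch only)."""
    if bits not in (8, 16, 32, 64):
        raise ValueError(f"unsupported bit width: {bits}")
    if backend == "pandas":
        # pandas nullable extension dtype, e.g. Int64Dtype
        return getattr(pd, f"Int{bits}Dtype")()
    if backend == "numpy":
        # plain NumPy dtype, e.g. int32
        return np.dtype(f"int{bits}")
    if backend == "pyarrow":
        import pyarrow as pa  # optional dependency, imported lazily

        return pd.ArrowDtype(getattr(pa, f"int{bits}")())
    raise ValueError(f"unknown backend: {backend!r}")
```

A single entry point like this keeps the backend choice as a parameter rather than baked into call sites, which is what makes extending to new dtypes cheap.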

This change simplifies dtype creation, reduces duplication, and ensures compatibility across backends, making it easier to extend support for new dtypes in the future.
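The parameter validation mentioned above (e.g., the mode check in string()) might look like the following hedged sketch; the error messages and the pandas-backend restriction on binary mode are assumptions for illustration, not the PR's actual behavior.

```python
import pandas as pd


def string(mode: str = "string", backend: str = "pandas"):
    """Return a string or binary dtype, validating `mode` (sketch only)."""
    if mode not in ("string", "binary"):
        raise ValueError(f"mode must be 'string' or 'binary', got {mode!r}")
    if backend == "pandas":
        if mode == "binary":
            # assumption: binary data has no native pandas extension dtype
            raise ValueError("binary dtypes require the pyarrow backend")
        return pd.StringDtype()
    if backend == "pyarrow":
        import pyarrow as pa  # optional dependency, imported lazily

        return pd.ArrowDtype(pa.string() if mode == "string" else pa.binary())
    raise ValueError(f"unknown backend: {backend!r}")
```

Validating eagerly at construction time surfaces a clear ValueError at the call site instead of a confusing failure deep inside a later operation.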


Co-authored-by: Pedro Santos <[email protected]>
Successfully merging this pull request may close these issues.

ENH: Consistent naming conventions for string dtype aliases