
Missing unit test for DEFAULT_CONFIG and read_settings() comparison #3372

Open
egberts opened this issue Jul 22, 2024 · 3 comments

@egberts (Contributor) commented Jul 22, 2024

  • [x] I have read the Filing Issues and subsequent “How to Get Help” sections of the documentation.
  • [x] I have searched the issues (including closed ones) and believe that this is not a duplicate.

Issue

There are several Settings items that differ drastically between DEFAULT_CONFIG and the result of read_settings(); these differences need to be resolved before a completely brand-new unit test can pass.

An "ideal" passing unit test between DEFAULT_CONFIG and read_settings() entails:

    def test_no_arguments(self):
        """No argument used"""
        # Do not stomp on the default settings in the code
        previous_settings = copy.deepcopy(DEFAULT_CONFIG)

        # Since no filespec is given, it falls back to DEFAULT_CONFIG
        current_settings = read_settings()

        assert_difference_in_settings(previous_settings, current_settings)
        assert current_settings == previous_settings

The above "ideal" unit test does not pass.

A helper function assists with identifying the differences in Settings:

    import logging

    from pelican.settings import Settings  # type alias for the settings mapping

    logger = logging.getLogger(__name__)


    def assert_difference_in_settings(previous: Settings, current: Settings):
        if previous != current:
            # Break down the key difference(s) first
            for this_item in previous:
                if this_item not in current:
                    logger.error(f"Item {this_item} not in current settings:")
                    logger.error(f"  Previous item {this_item}: '{previous[this_item]}'.")
                assert this_item in current
            for this_item in current:
                if this_item not in previous:
                    logger.error(f"Item {this_item} not in previous settings:")
                    logger.error(f"  Current item {this_item}: '{current[this_item]}'.")
                assert this_item in previous
            # Then break down the value difference(s)
            for this_item in previous:
                if previous[this_item] != current[this_item]:
                    logger.error(f"Item {this_item} not the same between previous "
                                 "and current settings:")
                    logger.error(f"  Previous item {this_item}: '{previous[this_item]}'.")
                    logger.error(f"  Current item {this_item}: '{current[this_item]}'.")
                assert previous[this_item] == current[this_item]

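For what it is worth, the helper also works on plain dicts, so it can be sanity-checked on its own. A minimal, hypothetical demonstration (the keys and values here are illustrative only, not real Pelican defaults):

    # Plain dicts standing in for Settings; values are made up for the demo.
    previous = {"PATH": ".", "FEED_DOMAIN": ""}
    current = {"PATH": ".", "FEED_DOMAIN": "https://example.com"}
    # Logs the differing FEED_DOMAIN values, then raises AssertionError.
    assert_difference_in_settings(previous, current)
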
I need to know from the Pelican community whether updating DEFAULT_CONFIG is an acceptable approach, DERIVING from the passing solution given below.

To make that particular unit test pass, the following tweaks to it are necessary:

    def test_no_arguments(self):
        """No argument used"""
        # Do not stomp on the default settings in the code
        previous_settings = copy.deepcopy(DEFAULT_CONFIG)
        # Do we have a problem stomping on some defaults to pass this test? Is this ok?
        previous_settings['ARTICLE_EXCLUDES'] = ['pages']
        previous_settings['PAGE_EXCLUDES'] = ['']
        previous_settings['FEED_DOMAIN'] = ''

        current_settings = read_settings()
        current_settings['PATH'] = os.curdir

        assert_difference_in_settings(previous_settings, current_settings)
        assert current_settings == previous_settings
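
For clarity, the DEFAULT_CONFIG-update alternative I am asking about would look roughly like this. This is a sketch only, with the values lifted from the tweaks above; the real change would edit the literal entries in pelican/settings.py rather than call update() at runtime:

    # Sketch: align DEFAULT_CONFIG with what read_settings() returns when
    # no configuration file is given; values taken from the tweaks above.
    from pelican.settings import DEFAULT_CONFIG

    DEFAULT_CONFIG.update(
        {
            "ARTICLE_EXCLUDES": ["pages"],
            "PAGE_EXCLUDES": [""],
            "FEED_DOMAIN": "",
        }
    )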

Platform

  • OS version and name: Linux 6.1.0-21-amd64 SMP PREEMPT_DYNAMIC Debian 6.1.90-1 (2024-05-03) x86_64 GNU/Linux
  • Python version: 3.11.2
  • Pelican version: HEAD (513abbf)
  • Link to theme: m.css
  • Links to plugins: pelican-plugins
  • Link to your site: n/a
  • Link to your source: n/a
@egberts egberts added the bug label Jul 22, 2024
@egberts egberts changed the title Missing unit test for DEFAULT_CONFIG and read_settings() comparison; `[pages]' missing. Missing unit test for DEFAULT_CONFIG and read_settings() comparison Jul 22, 2024
@egberts (Contributor, Author) commented Jul 22, 2024

Wow, a 10-year-old quickfix by @smartass101 in #1322, over at https://github.com/getpelican/pelican/blame/513abbfdc668946590194c637dbe90ec228aaf6f/pelican/tests/test_settings.py#L51

Thinking....

That was the moment DEFAULT_CONFIG should have been changed, instead of fudging the test_read_empty_settings unit test to pass.

@egberts (Contributor, Author) commented Jul 22, 2024

FEED_DOMAIN: introduced here at line 222, 10 years ago, by @justinmayer.

ARTICLE_EXCLUDES: introduced here at line 12 in pelican/settings.py, 10 years ago, by @bbinet.

PAGE_EXCLUDES: introduced in the same place as ARTICLE_EXCLUDES above.

It looks pretty safe to bootstrap DEFAULT_CONFIG up so that the unit test, and the design, are "correct".

Agreeing to this does not mean it is fixed yet, because PATH needs further examination.

@egberts (Contributor, Author) commented Jul 22, 2024

Slightly off-topic: let us keep in mind that read_settings() makes intensive adjustments to some of the Settings items (via deprecation handling and upgrading paths to absolute file paths), but only when it reads an actual Pelican configuration file. This particular unit test therefore focuses solely on the case where no valid configuration file is supplied via the read_settings(path=) argument, dealing with DEFAULT_CONFIG alone.

Only with load_source() (using a default-conf.py file, or a "pelicanconf" module name pointing at a defaulted pelicanconf-default.py) and its unit tests can we do the verbatim comparison between the previous DEFAULT_CONFIG and the extracted current settings as-is.

By dealing with this issue's DEFAULT_CONFIG "normalization", we can then write proper unit tests for both load_source() and the file-less read_settings().
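
To illustrate the load_source() path, here is a rough sketch of what such a verbatim-comparison test might look like, assuming the two-argument pelican.settings.load_source(name, path) helper and reusing assert_difference_in_settings() from above. The file pelicanconf-default.py is hypothetical: its UPPERCASE module-level names would have to mirror DEFAULT_CONFIG exactly.

    import copy

    from pelican.settings import DEFAULT_CONFIG, load_source


    def test_load_source_default_conf():
        """Hypothetical verbatim comparison via a defaulted pelicanconf."""
        previous_settings = copy.deepcopy(DEFAULT_CONFIG)
        # 'pelicanconf-default.py' is a hypothetical file whose UPPERCASE
        # module-level names mirror DEFAULT_CONFIG exactly.
        module = load_source("pelicanconf", "pelicanconf-default.py")
        current_settings = {
            name: getattr(module, name) for name in dir(module) if name.isupper()
        }
        assert_difference_in_settings(previous_settings, current_settings)
        assert current_settings == previous_settings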
