
io/romio314 does not work with minloc/maxloc datatypes #5009

Closed
hjelmn opened this issue Apr 4, 2018 · 25 comments
hjelmn commented Apr 4, 2018

Thank you for taking the time to submit an issue!

Background information

Now, I don't know if these should work at all. I really want these datatypes killed in MPI-4.0.

What version of Open MPI are you using? (e.g., v1.10.3, v2.1.0, git branch name and hash, etc.)

master

Describe how Open MPI was installed (e.g., from a source/distribution tarball, from a git clone, from an operating system distribution package, etc.)

Built from source.

Please describe the system on which you are running

  • Operating system/version: macOS 10.13
  • Computer hardware: MacBookPro14,3
  • Network type: Shared memory

Details of the problem

When using the romio314 component the MPI_File_write* functions fail because they call MPI_Type_get_contents() on our non-contiguous predefined datatypes (MPI_DOUBLE_INT, MPI_SHORT_INT, etc). If these types are allowed for files (I REALLY HOPE NOT) then this is a bug.

Reproducer:

#include <stdlib.h>
#include <stdio.h>

#include <mpi.h>

struct we_should {
  double kill;
  int minmax;
};

int main (int argc, char *argv[]) {
  MPI_File mfh;
  struct we_should foo = {.kill = 1.0, .minmax = 1};

  MPI_Init (NULL, NULL);

  MPI_File_open (MPI_COMM_WORLD, "/tmp/typetest.out", MPI_MODE_CREATE | MPI_MODE_WRONLY,
                 MPI_INFO_NULL, &mfh);

  MPI_File_write_all (mfh, &foo, 1, MPI_DOUBLE_INT, MPI_STATUS_IGNORE);
  MPI_File_close (&mfh);


  MPI_Finalize ();
  return 0;
}

Expected result: success

Actual result:

[mordor:83551] *** An error occurred in MPI_Type_get_contents
[mordor:83551] *** reported by process [3613327361,0]
[mordor:83551] *** on communicator MPI_COMM_WORLD
[mordor:83551] *** MPI_ERR_INTERN: internal error
[mordor:83551] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[mordor:83551] ***    and potentially your MPI job)
hjelmn commented Apr 4, 2018

@bosilca @roblatham00 @edgargabriel Thoughts? This works fine with ompio.

hjelmn added the bug label Apr 4, 2018
edgargabriel commented:

@hjelmn: I have not checked the standard in detail, but I am spontaneously not aware of any logical reason why not to allow these datatypes. In the end, exactly the same datatype constructed with MPI_Type_create_struct would be perfectly legal, so the question really is just whether it's a problem because it's a predefined datatype.

hjelmn commented Apr 4, 2018

@edgargabriel I believe so. MPI_Type_get_contents is not allowed on a named predefined datatype.

It sounds like this is a romio bug.

roblatham00 commented:

Sounds like it would be sufficient to check which combiner MPI_TYPE_GET_ENVELOPE returns, except I don't see MPI_DOUBLE_INT in Table 4.1 (combiner values returned from MPI_TYPE_GET_ENVELOPE). You said these are non-contiguous predefined datatypes... but are they MPI_COMBINER_NAMED?

hjelmn commented Apr 4, 2018

@roblatham00 Yeah. MPI_SHORT_INT is internally non-contiguous, MPI_DOUBLE_INT is externally non-contiguous. Open MPI is returning MPI_COMBINER_NAMED for the combiner from MPI_Type_get_envelope.

hjelmn commented Apr 11, 2018

@roblatham00 Any progress on a fix? I might have time today to attempt to fix it myself and I need the fix to test a patch that makes romio MPI-3 friendly.

roblatham00 commented:

here try this. I don't think it's entirely correct... these built-in types are not contiguous, right?

0001-it-is-illegal-to-call-get_contents-on-a-built-in-typ.patch.txt

hjelmn commented Apr 11, 2018

Ok, will try that now.

Yes, they are non-contiguous.

hjelmn commented Apr 11, 2018

Not quite. It aborts with an assert outside of lldb but doesn't under lldb (because of course it does). The file produced under lldb for MPI_SHORT_INT doesn't have the correct data because there still is a gap in the datatype.

These struct predefined types are a real PIA.

hjelmn commented Apr 11, 2018

It works if I create a dummy datatype in ADIOI_Flatten_datatype():

struct adio_short_int {
  short elem_s;
  int elem_i;
};

if (MPI_SHORT_INT == datatype) {
  MPI_Datatype _types[2] = {MPI_SHORT, MPI_INT};
  MPI_Aint _disps[2] = {0, offsetof (struct adio_short_int, elem_i)};
  int _blk_lens[2] = {1, 1};
  MPI_Type_create_struct (2, _blk_lens, _disps, _types, &datatype);
  MPI_Type_commit (&datatype);
}

Not a real fix, just a test.

hjelmn commented Apr 12, 2018

Well, here is a total hack that works (but sucks). It gets me running for testing:

struct adio_short_int {
  short elem_s;
  int elem_i;
};

struct adio_double_int {
  double elem_d;
  int elem_i;
};

struct adio_long_int {
  long elem_l;
  int elem_i;
};

struct adio_long_double_int {
  long double elem_ld;
  int elem_i;
};

static int ADIOI_Type_get_envelope (MPI_Datatype datatype, int *num_integers,
                                    int *num_addresses, int *num_datatypes, int *combiner)
{
  MPI_Datatype _types[2];
  MPI_Aint _disps[2] = {0};
  int _blk_lens[2] = {1, 1};
  int rc, is_contig;

  ADIOI_Datatype_iscontig(datatype, &is_contig);

  rc = MPI_Type_get_envelope (datatype, num_integers, num_addresses, num_datatypes, combiner);
  if (MPI_SUCCESS != rc || MPI_COMBINER_NAMED != *combiner || is_contig) {
    return rc;
  }

  if (MPI_SHORT_INT == datatype) {
      _types[0] = MPI_SHORT;
      _types[1] = MPI_INT;
      _disps[1] = offsetof (struct adio_short_int, elem_i);

      MPI_Type_create_struct (2, _blk_lens, _disps, _types, &datatype);
      MPI_Type_commit (&datatype);
      rc = MPI_Type_get_envelope (datatype, num_integers, num_addresses, num_datatypes, combiner);
      MPI_Type_free (&datatype);
  } else if (MPI_DOUBLE_INT == datatype) {
      _types[0] = MPI_DOUBLE;
      _types[1] = MPI_INT;
      _disps[1] = offsetof (struct adio_double_int, elem_i);

      MPI_Type_create_struct (2, _blk_lens, _disps, _types, &datatype);
      MPI_Type_commit (&datatype);
      rc = MPI_Type_get_envelope (datatype, num_integers, num_addresses, num_datatypes, combiner);
      MPI_Type_free (&datatype);
  } else if (MPI_LONG_DOUBLE_INT == datatype) {
      _types[0] = MPI_LONG_DOUBLE;
      _types[1] = MPI_INT;
      _disps[1] = offsetof (struct adio_long_double_int, elem_i);

      MPI_Type_create_struct (2, _blk_lens, _disps, _types, &datatype);
      MPI_Type_commit (&datatype);
      rc = MPI_Type_get_envelope (datatype, num_integers, num_addresses, num_datatypes, combiner);
      MPI_Type_free (&datatype);
  } else if (MPI_LONG_INT == datatype) {
      _types[0] = MPI_LONG;
      _types[1] = MPI_INT;
      _disps[1] = offsetof (struct adio_long_int, elem_i);

      MPI_Type_create_struct (2, _blk_lens, _disps, _types, &datatype);
      MPI_Type_commit (&datatype);
      rc = MPI_Type_get_envelope (datatype, num_integers, num_addresses, num_datatypes, combiner);
      MPI_Type_free (&datatype);
  }

  return rc;
}

static int ADIOI_Type_get_contents (MPI_Datatype datatype, int max_integers,
                                    int max_addresses, int max_datatypes, int array_of_integers[],
                                    MPI_Aint array_of_addresses[], MPI_Datatype array_of_datatypes[])
{
  int dontcare, combiner;
  MPI_Datatype _types[2];
  MPI_Aint _disps[2] = {0};
  int _blk_lens[2] = {1, 1};
  int rc;

  rc = MPI_Type_get_envelope (datatype, &dontcare, &dontcare, &dontcare, &combiner);
  if (MPI_SUCCESS != rc) {
    return rc;
  }

  if (MPI_COMBINER_NAMED != combiner) {
    return MPI_Type_get_contents (datatype, max_integers, max_addresses, max_datatypes,
                                  array_of_integers, array_of_addresses, array_of_datatypes);
  }

  if (MPI_SHORT_INT == datatype) {
      _types[0] = MPI_SHORT;
      _types[1] = MPI_INT;
      _disps[1] = offsetof (struct adio_short_int, elem_i);

      MPI_Type_create_struct (2, _blk_lens, _disps, _types, &datatype);
      MPI_Type_commit (&datatype);
      rc = ADIOI_Type_get_contents (datatype, max_integers, max_addresses, max_datatypes,
                                    array_of_integers, array_of_addresses, array_of_datatypes);
      MPI_Type_free (&datatype);
  } else if (MPI_DOUBLE_INT == datatype) {
      _types[0] = MPI_DOUBLE;
      _types[1] = MPI_INT;
      _disps[1] = offsetof (struct adio_double_int, elem_i);

      MPI_Type_create_struct (2, _blk_lens, _disps, _types, &datatype);
      MPI_Type_commit (&datatype);
      rc = ADIOI_Type_get_contents (datatype, max_integers, max_addresses, max_datatypes,
                                    array_of_integers, array_of_addresses, array_of_datatypes);
      MPI_Type_free (&datatype);
  } else if (MPI_LONG_DOUBLE_INT == datatype) {
      _types[0] = MPI_LONG_DOUBLE;
      _types[1] = MPI_INT;
      _disps[1] = offsetof (struct adio_long_double_int, elem_i);

      MPI_Type_create_struct (2, _blk_lens, _disps, _types, &datatype);
      MPI_Type_commit (&datatype);
      rc = ADIOI_Type_get_contents (datatype, max_integers, max_addresses, max_datatypes,
                                    array_of_integers, array_of_addresses, array_of_datatypes);
      MPI_Type_free (&datatype);
  } else if (MPI_LONG_INT == datatype) {
      _types[0] = MPI_LONG;
      _types[1] = MPI_INT;
      _disps[1] = offsetof (struct adio_long_int, elem_i);

      MPI_Type_create_struct (2, _blk_lens, _disps, _types, &datatype);
      MPI_Type_commit (&datatype);
      rc = ADIOI_Type_get_contents (datatype, max_integers, max_addresses, max_datatypes,
                                    array_of_integers, array_of_addresses, array_of_datatypes);
      MPI_Type_free (&datatype);
  } else {
    return MPI_ERR_TYPE;
  }

  return rc;
}

Change most calls to MPI_Type_get_envelope and MPI_Type_get_contents to call these functions instead. The only exceptions are the ones in the #ifndef MPISGI code blocks.

hjelmn commented Apr 12, 2018

Ok, that allowed me to find a bug in my MPI-3 patch for romio. I ran into an interesting issue though: mpi_test_suite in MTT fails on all the minloc/maxloc types. It might be that I hacked these types in wrong, or that there are further bugs in my patch. I hacked OMPI to make these types contiguous and the tests all pass. Out of curiosity, what is the expected extent of one of these types on disk?

For example, if I write an MPI_SHORT_INT on a platform with sizeof (short) = 2, sizeof (int) = 4, and alignof (int) = 4, should I get 6 bytes on disk or 8? @edgargabriel ompio packs them to 6 bytes, but should it? Honestly, if the answer is "it depends" then this is just another reason these types suck.

edgargabriel commented:

@hjelmn I don't think we do anything 'special' for these data types, we use whatever description we receive from the datatype engine.

hjelmn commented Apr 12, 2018

@edgargabriel Ok, so they get packed like any other non-contiguous datatype. I don't know if that is correct. They are special (and not in a good way), which is why this bug exists. They are composed, yet they are predefined. I think the requirement that MPI_Type_get_contents() be illegal for these types is silly. Either the types should go away (which has always been my stance) or that limitation should go away.

BTW, I can't run mpi_test_suite in MTT with ompio on my Mac. I get the following error:

IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_CHAR (1/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_UNSIGNED_CHAR (2/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_BYTE (4/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_SHORT (5/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_UNSIGNED_SHORT (6/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_INT (7/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_UNSIGNED (8/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_LONG (9/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_UNSIGNED_LONG (10/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_FLOAT (11/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_DOUBLE (12/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_LONG_DOUBLE (13/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_LONG_LONG (14/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_FLOAT_INT (15/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_DOUBLE_INT (16/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_LONG_INT (17/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_SHORT_INT (18/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_2INT (19/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_LONG_DOUBLE_INT (20/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_CONTIGUOUS_INT (21/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_VECTOR_INT (22/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_HVECTOR_INT (23/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_INDEXED_INT (24/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_HINDEXED_INT (25/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_STRUCT_INT (26/28)
IO tests iread_at (66/101), comm MPI_COMM_WORLD (1/9), type MPI_TYPE_MIX (27/28)
malloc debug: mca_fbtl_posix_ipreadv: error in aio_read(): Resource temporarily unavailable

I wanted to see if I get the same datatype errors with ompio.

hjelmn commented Apr 12, 2018

@bosilca Can you add anything here? I am really tempted to open a PR to add explicit padding to the composed predefined datatypes to make them always contiguous. This would fix this issue and the osc/rdma issue with them as well. This would have two side-effects:

  1. The state of the padding bits would become undefined (as is the case with any struct padding bit in C), and

  2. The extent and true extent of these types would match their struct extent. This isn't the case now.

I don't know if that will run into any trouble with the standard though.

hjelmn commented Apr 12, 2018

I opened an issue with the collective working group (they own minloc/maxloc) to get some clarification. See mpiwg-coll/coll-issues#3

bosilca commented Apr 12, 2018

@hjelmn forcing these datatypes to become contiguous by hiding the gap will basically prevent any packing and will make the data non-portable (portability is ensured by the fact that the data is packed without gaps).

I think the real issue is that romio assumed that all predefined types are contiguous. For the datatypes you mentioned (* + INT), one can get the extent and size to see if the data is really contiguous, and if not, act similarly to your proposed patch.

Last thing: in your patch you don't need to create the datatype and then call get_contents, because what you get back is exactly what you provided to the MPI_Type_create_struct call, so you can save yourself the trouble and directly copy _types and _disps.

edgargabriel commented:

@hjelmn I will try to look into the issue on Macs (I just need to find access to one again, since I don't have one in our lab). Note that a while back I determined that mpi_test_suite is, for various reasons, not really well suited for parallel I/O tests. I can describe some of the issues I had separately if you want, independent of this ticket.

hjelmn commented Apr 13, 2018

@bosilca Indeed. I will update my patch based on your feedback. Do you think it would make sense to change the standard to allow MPI_Type_get_contents() on the predefined composed types?

hjelmn added a commit to hjelmn/ompi that referenced this issue Apr 13, 2018
romio assumes that all predefined datatypes are contiguous. Because of
the (terribly named) composed datatypes MPI_SHORT_INT, MPI_DOUBLE_INT,
MPI_LONG_INT, etc this is an incorrect assumption. The simplest way to
fix this is to override the MPI_Type_get_envelope and
MPI_Type_get_contents calls with calls that will work on these
datatypes. Note that not all calls to these MPI functions are
replaced, only the ones used when flattening a non-contiguous
datatype.

References open-mpi#5009

Signed-off-by: Nathan Hjelm <[email protected]>
hjelmn added a commit to hjelmn/ompi that referenced this issue Apr 13, 2018 (same commit message as above)
hjelmn added a commit to hjelmn/ompi that referenced this issue Apr 13, 2018 (same commit message as above)
hjelmn commented Apr 13, 2018

@edgargabriel Is there a better test suite for IO? I need to test the MPI-3 fixes for romio so I can get those committed. After that we can start disabling removed functions by default.

bosilca commented Apr 13, 2018

@hjelmn you are right regarding the standard; it needs to be amended. The solution is to allow the user to understand how these possibly non-contiguous types are viewed by the MPI internals. This forces the MPI standard either to impose how the MPI_*_INT types are defined (which would also allow users to use them in user-defined ops), or to allow an exception for these types with regard to MPI_Type_get_contents and MPI_Type_get_elements.

hjelmn added a commit that referenced this issue Apr 16, 2018 (same commit message as above)
ggouaillardet added a commit to ggouaillardet/ompi that referenced this issue Jun 20, 2018 (same commit message as above; back-ported from commit open-mpi/ompi@4d876ec)
ggouaillardet added a commit to ggouaillardet/ompi that referenced this issue Jun 20, 2018 (same commit message as above; back-ported from commit open-mpi/ompi@4d876ec)
jsquyres commented:

@hjelmn @ggouaillardet I see that 2 commits (with nearly identical commit messages) were committed to master for this.

I'm not sure what happened there...

Do these need to be cherry-picked to v3.0.x and v3.1.x? I do not see these commits on those branches.

I only ask (vs. just making PRs for them) because I don't know why there are 2 near-identical commits... 🤔

(no need for v4.0.x -- these were done long before v4.0.x branched)

hjelmn commented Sep 14, 2018

@jsquyres Different versions of romio?

jsquyres commented:

Boom -- yes, that's the issue; I missed that in the commit messages: ROMIO 3.1.4 vs. 3.2.1.

Looks like we need the v3.1.4 variant for the v3.0.x and v3.1.x branches. I'll go file those now.

jsquyres pushed a commit to jsquyres/ompi that referenced this issue Sep 14, 2018 (same commit message as above; cherry picked from commit 4d876ec)
jsquyres pushed a commit to jsquyres/ompi that referenced this issue Sep 14, 2018 (same commit message as above; cherry picked from commit 4d876ec)
hppritcha commented:

Closed via #5710, #5711, #5068.
