
Extend functionality of assemble! method #968

Open
Joroks opened this issue Jun 1, 2024 · 5 comments
Joroks commented Jun 1, 2024

Currently, assemble! assumes that Ke is a Matrix. In some situations, the user might want to assemble single values into the matrix. This could look something like this:

function assemble!(a::Ferrite.Assembler{T}, rowdofs::Int, coldofs::Int, Ke::T) where {T}
    push!(a.V, Ke)
    push!(a.I, rowdofs)
    push!(a.J, coldofs)
end

Furthermore, in the project I'm currently working on, I know that my element matrices have a predictable pattern of zero entries (because the entry for each node is essentially a diagonal matrix, I would end up with 2/3 structural zeros in the final assembled matrix). It might be worthwhile to include a keyword argument dropzeros that automatically removes these zero entries:

function assemble!(a::Ferrite.Assembler{T}, rowdofs::AbstractVector{Int}, coldofs::AbstractVector{Int}, Ke::AbstractMatrix{T}; dropzeros::Bool=false) where {T}
    @assert(size(Ke,1) == length(rowdofs))
    @assert(size(Ke,2) == length(coldofs))

    for I in CartesianIndices(Ke)
        dropzeros && Ke[I] == 0 && continue

        (i,j) = Tuple(I)

        push!(a.V, Ke[I])
        push!(a.I, rowdofs[i])
        push!(a.J, coldofs[j])
    end
end

This way, I wouldn't have to call dropzeros on the assembled matrix, which would probably be slightly faster and allocate less memory during assembly.

If this is something worth including, I can get to work implementing it and creating a pull request. However, this would be my first time contributing to a project and I'd first have to figure out how that actually works 😅

@fredrikekre
Member

fredrikekre commented Jun 1, 2024

Hi, first off, is there a reason you are using the "old" assembler instead of the newer one based on create_sparsity_pattern? The latter is already much more flexible and, I would say, more performant.

For example, with the keyword argument coupling you can specify the coupling between degrees of freedom on the element level. You can use it to specify coupling between different fields (if you have more than one) or different components (if you have a vector-valued problem) or simply dof-by-dof. You can think of the coupling argument as a "template" for the element matrices you will assemble. For example, if you know you only have diagonal entries, and the element matrix is 4x4 you can pass

coupling = [
    true false false false
    false true false false
    false false true false
    false false false true
]
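A coupling matrix like this would then be passed when creating the sparsity pattern; roughly like the following (a sketch only, where `dh` is an assumed DofHandler):

```julia
# Sketch: pass the coupling "template" so only the coupled entries are
# allocated in the sparsity pattern (dh is a hypothetical DofHandler).
K = create_sparsity_pattern(dh; coupling = coupling)
```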

It might be worthwhile to include a keyword argument dropzeros that automatically removes these zero entries:

The preferred assembler actually already does this!

Currently, assemble! assumes, that Ke is a Matrix. In some situations, the user might want to assemble single values into the matrix.

I think this would be a good addition. For the standard SparseMatrixCSC type it would be equivalent to simply doing

K[row, col] += val

though, but it's good to have a generic function which can be used for all supported matrix types.

@fredrikekre
Member

Actually, we already have

"""
addindex!(A::AbstractMatrix{T}, v::T, i::Int, j::Int)
addindex!(b::AbstractVector{T}, v::T, i::Int)
Equivalent to `A[i, j] += v` but more efficient.
`A[i, j] += v` is lowered to `A[i, j] = A[i, j] + v` which requires a double lookup of the
memory location for index `(i, j)` -- one time for the read, and one time for the write.
This method avoids the double lookup.
Zeros are ignored (i.e. if `iszero(v)`) by returning early. If the index `(i, j)` is not
existing in the sparsity pattern of `A` this method throws a `SparsityError`.
Fallback: `A[i, j] += v`.
"""
addindex!

Perhaps this should be called assemble! though? It does the same thing, but for a single entry, and it works directly on the matrix rather than the assembler.
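If it were renamed, a single-entry assemble! could be a thin wrapper over the existing addindex! shown above (just a sketch; the argument order mirrors the assemble! proposal from the original post):

```julia
# Hypothetical sketch: single-entry assemble! forwarding to addindex!,
# which already skips zeros and errors on entries outside the sparsity pattern.
function assemble!(A::AbstractMatrix{T}, i::Int, j::Int, v::T) where {T}
    Ferrite.addindex!(A, v, i, j)
    return A
end
```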

@Joroks
Author

Joroks commented Jun 1, 2024

First of all, I'm currently working with Ferrite v0.3.14, so this might be outdated (from what I've seen, the current dev version replaces the FieldHandlers with SubDofHandlers, which might be exactly what I'm looking for). I'm also still quite new to Ferrite, so I might do things in a more complicated way than necessary.

The project I'm currently working on is also a bit more involved than a pure FE simulation, so I'll give you some context of what I'm trying to achieve:
I'm currently migrating an existing FE-MD (Molecular Dynamics) coupling program (written in Matlab) to Julia to a) improve performance and b) avoid the use of proprietary software. The approach consists of three fields/domains:

  1. the pure FE-Displacement Field
  2. an intermediate Lagrange Multiplier Field (LM)
  3. a Field that's interpolated from Anchor Points from the MD simulation

The LM field uses a subset of the cells in the FE field (those on the FE-MD boundary).

My goal is to assemble two coupling matrices:

  1. between the FE and LM field
  2. between the LM and MD field

The LM-FE coupling matrix has more or less the structure of a regular FE matrix, so I might be able to use the sparsity pattern assembler here.

The LM-MD coupling matrix, however, has a structure based on the anchor points in the MD field that are in the vicinity of the Gauss integration points of the LM field, which makes creating a sparsity pattern beforehand not worthwhile. (The way you assemble this matrix is also different from how you'd normally assemble an FE matrix, so I'll spare you the details.)

This is my current code for the FE-LM coupling matrix:

function assemble_Gcl(FE, LM, β1, β2)
    Gcl = start_assemble()

    cellValues = CellScalarValues(
        QuadratureRule{3, RefCube}(:legendre, 2), 
        Lagrange{3, RefCube, 1}()
    )

    cellFE = CellCache(FE)
    cellLM = CellCache(LM)

    boundaryCells = only(LM.fieldhandlers).cellset

    cellDofs = ndofs_per_cell(LM)
    Gcl_elem = zeros(cellDofs, cellDofs)

    for cell in boundaryCells
        reinit!(cellLM, cell)
        reinit!(cellFE, cell)
        reinit!(cellValues, cellFE)

        fill_Gcl_elem!(Gcl_elem, cellValues, β1, β2)

        assemble!(Gcl, cellFE.dofs, cellLM.dofs, Gcl_elem)
    end

    return end_assemble(Gcl) |> dropzeros!
end

function fill_Gcl_elem!(Gcl_elem::Matrix, cv::CellScalarValues{dim}, β1::Real, β2::Real) where dim
    fill!(Gcl_elem, 0.0)

    base_functions = 1:getnbasefunctions(cv)

    for gp in 1:getnquadpoints(cv)
        dΩ = getdetJdV(cv, gp)

        for j in base_functions
            rows = local_dofs(dim, j)

            for k in base_functions
                cols = local_dofs(dim, k)

                valueContribution = shape_value(cv, gp, j) * shape_value(cv, gp, k)
                gradientContribution = shape_gradient(cv, gp, j) ⋅ shape_gradient(cv, gp, k)

                Gcl_elem[rows,cols] += I*(β1*valueContribution + β2*gradientContribution)*dΩ
            end
        end
    end

    return Gcl_elem
end

local_dofs(dim, basefunction) = (1:dim) .+ dim*(basefunction-1)

FE is a MixedDofHandler that contains only the cells of the FE field
LM is a MixedDofHandler that contains only the cells of the LM field

I haven't found a way to create a sparsity pattern for each "sub matrix" (when renumbering FieldWise) of the full system, so I'm using two DofHandlers with the older assembler.

Here's an example of what the Gcl_elem matrix looks like:

24×24 Matrix{Float64}:
  3.84013    0.0        0.0        1.5957     0.0        0.0       …   0.0        0.0       -0.330839   0.0        0.0     
  0.0        3.84013    0.0        0.0        1.5957     0.0          -0.24651    0.0        0.0       -0.330839   0.0     
  0.0        0.0        3.84013    0.0        0.0        1.5957        0.0       -0.24651    0.0        0.0       -0.330839
  1.5957     0.0        0.0        3.84013    0.0        0.0           0.0        0.0       -0.24651    0.0        0.0     
  0.0        1.5957     0.0        0.0        3.84013    0.0          -0.330839   0.0        0.0       -0.24651    0.0     
  0.0        0.0        1.5957     0.0        0.0        3.84013   …   0.0       -0.330839   0.0        0.0       -0.24651 
 -0.330567   0.0        0.0       -0.336772   0.0        0.0           0.0        0.0        0.635398   0.0        0.0     
  0.0       -0.330567   0.0        0.0       -0.336772   0.0           1.59516    0.0        0.0        0.635398   0.0     
  0.0        0.0       -0.330567   0.0        0.0       -0.336772      0.0        1.59516    0.0        0.0        0.635398
 -0.336772   0.0        0.0       -0.330567   0.0        0.0           0.0        0.0        1.59516    0.0        0.0
  0.0       -0.336772   0.0        0.0       -0.330567   0.0       …   0.635398   0.0        0.0        1.59516    0.0
  0.0        0.0       -0.336772   0.0        0.0       -0.330567      0.0        0.635398   0.0        0.0        1.59516        
  1.59516    0.0        0.0        0.635398   0.0        0.0           0.0        0.0       -0.336772   0.0        0.0
  0.0        1.59516    0.0        0.0        0.635398   0.0          -0.330567   0.0        0.0       -0.336772   0.0
  0.0        0.0        1.59516    0.0        0.0        0.635398      0.0       -0.330567   0.0        0.0       -0.336772       
  0.635398   0.0        0.0        1.59516    0.0        0.0       …   0.0        0.0       -0.330567   0.0        0.0
  0.0        0.635398   0.0        0.0        1.59516    0.0          -0.336772   0.0        0.0       -0.330567   0.0
  0.0        0.0        0.635398   0.0        0.0        1.59516       0.0       -0.336772   0.0        0.0       -0.330567       
 -0.24651    0.0        0.0       -0.330839   0.0        0.0           0.0        0.0        1.5957     0.0        0.0
  0.0       -0.24651    0.0        0.0       -0.330839   0.0           3.84013    0.0        0.0        1.5957     0.0
  0.0        0.0       -0.24651    0.0        0.0       -0.330839  …   0.0        3.84013    0.0        0.0        1.5957
 -0.330839   0.0        0.0       -0.24651    0.0        0.0           0.0        0.0        3.84013    0.0        0.0
  0.0       -0.330839   0.0        0.0       -0.24651    0.0           1.5957     0.0        0.0        3.84013    0.0
  0.0        0.0       -0.330839   0.0        0.0       -0.24651       0.0        1.5957     0.0        0.0        3.84013

Also, how close is the current master branch to the next release version? It seems like the things I'm trying to do are more convenient in the newer version, so I might refactor my code to use the dev version.

@fredrikekre
Member

Okay, interesting! The new SubDofHandlers are basically the same as the old MixedDofHandler + FieldHandler together, but work a bit more like the regular DofHandler. (If you are curious about this, you can watch @kimauth's video from last year's FerriteCon: https://youtu.be/r6CVZffKCAg?si=3D69RFCry1pUvFzt)

Seems like the old assembler is useful to you in this case, so we should keep it around (note that there was some discussion about removing it in #916 (comment)).

It might be worthwhile to include a keyword argument dropzeros that automatically removes these zero entries:

This would be a good contribution, I think. We can do it unconditionally (no need for a keyword argument) since we already ignore zeros in the other assembler. Do you want to create a pull request? It might be worth waiting until #916 is finalized, since that makes some changes to the old assembler (for example, it renames it to COOAssembler).
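Dropping zeros unconditionally in the old (COO-style) assembler could look roughly like this (a sketch only; the field names a.I, a.J, a.V follow the snippets earlier in this thread, and COOAssembler is the name proposed in #916):

```julia
# Sketch: COO-style assemble! that skips zero entries unconditionally,
# matching the behavior of the sparsity-pattern-based assembler.
function assemble!(a, rowdofs::AbstractVector{Int}, coldofs::AbstractVector{Int},
                   Ke::AbstractMatrix)
    @assert size(Ke) == (length(rowdofs), length(coldofs))
    for j in eachindex(coldofs), i in eachindex(rowdofs)
        v = Ke[i, j]
        iszero(v) && continue  # drop (structural) zeros up front
        push!(a.I, rowdofs[i])
        push!(a.J, coldofs[j])
        push!(a.V, v)
    end
    return a
end
```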

Also, how close is the current master branch to the next release version? It seems like the things I'm trying to do are more convenient in the newer version, so I might refactor my code to use the dev version.

Hopefully close; you can see the milestoned items here: https://github.com/Ferrite-FEM/Ferrite.jl/milestones/v1.0.0 One thing that could potentially be useful for you is #888, which isn't merged yet. I don't think it is a bad idea to upgrade already now. The changelog has pretty good descriptions of what you need to change, and if you run into trouble, that would indicate we need to improve the upgrade guide!

@Joroks
Author

Joroks commented Jun 1, 2024

This would be a good contribution, I think. We can do it unconditionally (no need for a keyword argument) since we already ignore zeros in the other assembler. Do you want to create a pull request? It might be worth waiting until #916 is finalized, since that makes some changes to the old assembler (for example, it renames it to COOAssembler).

I'll create a pull request for this once #916 is finalized then. While I'm at it, I would also overload end_assemble() to allow the user to explicitly specify the size of the resulting matrix. With the way my dofs are currently ordered, there is a large block of zero entries at the very bottom of the matrix that gets cut off by the assembler. So I currently have to hack the assembler to get the result I want:

return sparse(Gcl.I, Gcl.J, Gcl.V, Ferrite.ndofs(FE), Ferrite.ndofs(LM)) |> dropzeros!
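An end_assemble overload with explicit dimensions could be as simple as this (a sketch; field names as in the snippets above):

```julia
using SparseArrays

# Sketch: end_assemble with a user-specified matrix size, so trailing
# all-zero rows/columns are not cut off by sparse()'s inferred size.
function end_assemble(a, nrows::Int, ncols::Int)
    return sparse(a.I, a.J, a.V, nrows, ncols)
end
```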

