Collapse to Function vs. Collapse Nodes

Hi,

wouldn’t it be better to use Collapse Nodes instead of Collapse to Function when you’re not planning to reuse the logic or access it from outside the BP? I know it makes no difference in performance here, but it might in bigger projects.

We are saving a call to a function through its pointer. I know it sounds like nitpicking, but I just want to clarify when to use what.

Thanks in advance!


A valid point. Functions do give some structure to your code, rather than having massive, sprawling Blueprints, and they do get big in a hurry. Having your code in functions helps with debugging as well if you have an issue. I came across a project recently where there was a bug in a function, and I tracked it down by detaching calls at each stage.

If you have groups of nodes instead, that makes reconnecting more tricky (not impossible), but I was able to track the bug down within a few hours (it was not my project or code, so good going, really).

It really does depend. Functions also don’t have to be reusable, but collapsed nodes never are, regardless of whether they could be.

Thanks for your answer.

A valid point. Functions do give some structure to your code, rather than having massive, sprawling Blueprints, and they do get big in a hurry. Having your code in functions helps with debugging as well if you have an issue. I came across a project recently where there was a bug in a function, and I tracked it down by detaching calls at each stage.

The call-stack trace is a good point in favour of functions. But structuring sprawling BPs is something you can accomplish with both methods, i.e. all three (+ Macros).

However, when would there be a use case for node collapsing over functions? I could only think of cases where these three conditions are met:

  • No plans to reuse
  • Small groups of code
  • Many executions (loop, Tick event, etc.) → saves function calls.

Maybe it’s really not that important, because if performance matters, you’d rather implement it in C++ anyway.


No plans to reuse - I love this one, and small groups of code. This was the case with some legacy projects the company I work for has now, which ended up with 15 copies of identical code in a single project, because copy-pasting was easier than refactoring: there was no need to reuse when the code was first written, plus some sloppy coding.

I have a rule, “just because ____ doesn’t mean ____”, and it’s my third rule of coding. Just because you have no plans to reuse doesn’t mean you won’t end up reusing. The key is no plans. Plans change.

As for loops and efficiency: do it right, not fastest. Think about maintainability over a few ms here and there. If it is an issue later, like you said, use C++ over Blueprint.

If it were C++, a good rule of thumb is that 4-5 lines of code repeated in several places need a function, and if your code exceeds about two screens’ worth, it probably needs to be tidied up, i.e. split into functions. Collapsing code blocks is not the same as nice code.


I fully agree that code should be maintainable. From that point of view it seems functions/macros are almost always to be preferred, and I could completely give up on collapsing nodes. Thanks for sharing your experience.


It does also allow for unit testing: you can test your functions and macros independently and ensure they work as intended.


I like test-driven development. However, is there a practical way of doing so with BPs? I also think that functions which generate graphics, are wired directly to GUI input, or do complex operations in 3D are very hard to test. Sorry for the off-topic, btw.

Yes, it is tricky. I’m not sure I have a good answer for this, but there are plenty of functions you might author that could be tested, such as numerical calculations.

This topic was automatically closed 20 days after the last reply. New replies are no longer allowed.
