Our Criteria

The framework we apply to every platform analysis — consistently, without exception. To understand our criteria is to understand both the value and the limits of what we publish.

The problem with platform-defined metrics

Every real estate crowdlending platform in Argentina publishes information about itself. The problem is not a lack of information — it is that each platform decides which information to publish, how to define the metrics it uses, and which aspects of its history to foreground or minimize.

A platform might report "return rates" using a definition that differs from another platform's "return rates." One might count a project as "completed" at a different stage than another. Timelines might be measured from different starting points. Without a shared framework, these differences make comparison meaningless.

Our criteria exist to establish a common language — not to evaluate which platform is better, but to ensure that when we describe what each platform does, we are describing it in terms that are comparable across platforms.


What we examine in every analysis

These six dimensions are applied identically to every platform. No dimension is weighted above another — each is examined on its own terms.

Legal Structure

We identify and document the legal entity type through which the platform operates and facilitates investor participation. This includes the regulatory framework applicable to that structure under Argentine law, what protections (if any) the structure affords participants, and whether the legal documentation is publicly accessible in a meaningful form.

Project Track Record

We document the ratio of announced projects to publicly verifiable completed projects. We note how the platform defines "completed," what documentation it provides upon project closure, and how it handles projects that were announced but did not proceed as described. We do not make inferences about projects we cannot verify.

Timeline Compliance

Where original projected durations are publicly documented alongside actual completion dates, we compare them. We also examine how platforms communicate when timelines shift — whether proactively and with explanation, or only when directly questioned. Consistency of timeline estimates across multiple projects is noted.

Accountability Transparency

We examine the depth and regularity of public reporting on active projects. This includes how financial information is presented, whether historical reports remain accessible, and what level of detail is provided to participants versus what is reserved for internal use. We assess this based solely on observable, public-facing behavior.

Information Accessibility

We evaluate how much substantive information about a platform's operations, fee structure, project details, and legal documentation is available without requiring registration, a phone call, or prior financial commitment. The premise is that a prospective participant should be able to form a meaningful preliminary view from public information alone.

Documented Gaps

Every analysis explicitly notes what we could not verify from publicly available information. We treat information gaps as data points in their own right. When a platform makes it difficult to access basic operational information, that difficulty is itself a finding worth documenting — regardless of what the available information says.

The boundaries of our editorial scope

Being clear about what we do not do is as important as describing what we do. Our editorial scope has deliberate limits — not because the excluded activities are unimportant, but because exceeding these limits would compromise the independence and comparability that define our work.

We do not rank platforms against each other. Ranking implies a judgment about relative quality that goes beyond what standardized comparison of public information can support. Two platforms might score differently on individual dimensions while being equally appropriate or inappropriate for a given participant's circumstances — circumstances we cannot and should not assess.

We do not recommend participation in any platform. Editorial analysis of publicly available information is not a substitute for the due diligence that any individual should conduct before committing capital to any investment vehicle.

We do not accept advertising, sponsorship, or any form of commercial relationship with platforms we analyze. This is not a policy we revisit periodically — it is a structural feature of how Kelvune operates.

Explicit limitations

  • We do not rank platforms by quality or desirability
  • We do not recommend participation in any platform
  • We do not assess returns or project financial performance
  • We do not use information provided directly by platforms
  • We do not accept compensation from analyzed entities
  • We do not make predictions about future platform behavior
  • We do not provide personalized investment guidance

See how our criteria translate into practice

Our methodology section describes the specific process through which we gather, organize, and publish our analyses — from data collection to editorial review.

View our methodology