Recent advances in Natural Language Processing, specifically in Natural Language Generation (NLG) for dialogue systems, have focused mainly on two-party conversations. However, group conversations, or multi-party conversations (MPC), are just as prevalent in our everyday lives. While multi-party conversation modeling has received some attention in recent times, MPC research lacks 1) corpora covering differing settings (formal/informal, synchronous/asynchronous), 2) dialogue models that can participate in informal open-domain settings while maintaining speaker information, and 3) evaluation metrics that provide better insight into how MPC models perform when operating in groups and interacting with multiple participants. We therefore take a three-pronged approach to advancing MPC modeling research. For corpus collection, we contribute Community Connect, a mock social media tool for collecting asynchronous MPC conversations, and use it in three experiments to gather everyday talk. For MPC modeling, we propose a response generation model, combining large language models (LLMs) with graph-structured networks, that takes participant relations into account, maintains multiple persona profiles, and generates responses attuned to each speaker's characteristics. Lastly, for MPC evaluation, we present an expansion of the taxonomy of NLG errors that contributes MPC-specific error categories and metrics. In addition to the taxonomy, we contribute improved evaluation standards against which progress on MPC tasks can be tracked more saliently. Through these contributions, we aim to fill key gaps in advancing MPC understanding and modeling, while also providing the tools to gauge progress to date.